Test Report: Docker_Linux_containerd 13812

afb3956fdbde357e4baa0f8617bfd5a64bad6558:2022-04-12:23465

Test fail (17/259)

TestPause/serial/Start (491.32s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-20220412195428-42006 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd

=== CONT  TestPause/serial/Start
pause_test.go:80: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p pause-20220412195428-42006 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: exit status 80 (8m9.199615017s)

-- stdout --
	* [pause-20220412195428-42006] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=13812
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on user configuration
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	* Using Docker driver with the root privilege
	* Starting control plane node pause-20220412195428-42006 in cluster pause-20220412195428-42006
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "pause-20220412195428-42006" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* Preparing Kubernetes v1.23.5 on containerd 1.5.10 ...
	  - kubelet.cni-conf-dir=/etc/cni/net.mk
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...

-- /stdout --
** stderr ** 
	! Your cgroup does not allow setting memory.
	! StartHost failed, but will try again: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname pause-20220412195428-42006 --name pause-20220412195428-42006 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=pause-20220412195428-42006 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=pause-20220412195428-42006 --network pause-20220412195428-42006 --ip 192.168.67.2 --volume pause-20220412195428-42006:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5: exit status 125
	stdout:
	f9d0991530d0581b50e510758083539a7e482f67eb3b3d1a5507dfc7ef305bba
	
	stderr:
	docker: Error response from daemon: network pause-20220412195428-42006 not found.
	
	E0412 19:58:03.552730  200789 docker.go:186] "Failed to stop" err=<
		sudo service docker.socket stop: Process exited with status 5
		stdout:
		
		stderr:
		Failed to stop docker.socket.service: Unit docker.socket.service not loaded.
	 > service="docker.socket"
	E0412 19:58:04.008773  200789 docker.go:189] "Failed to stop" err=<
		sudo service docker.service stop: Process exited with status 5
		stdout:
		
		stderr:
		Failed to stop docker.service.service: Unit docker.service.service not loaded.
	 > service="docker.service"
	X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-linux-amd64 start -p pause-20220412195428-42006 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd" : exit status 80
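Two separate faults are visible in the stderr above: the host's cgroup setup rejected the --memory=2048 limit ("Your cgroup does not allow setting memory"), and the first docker run died because the pause-20220412195428-42006 network vanished between network creation and container creation, forcing the recreate. A minimal diagnostic sketch for the cgroup half, assuming shell access to the Jenkins agent (illustrative commands, not part of the test harness):

	# Check whether the Docker daemon can enforce container memory limits.
	docker info --format '{{.MemoryLimit}}'   # prints "true" when supported
	# cgroup v1 hosts: the memory controller must be listed and enabled.
	grep memory /proc/cgroups
	# cgroup v2 hosts: "memory" must appear in the root controller list.
	cat /sys/fs/cgroup/cgroup.controllers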
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/Start]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-20220412195428-42006
helpers_test.go:235: (dbg) docker inspect pause-20220412195428-42006:

-- stdout --
	[
	    {
	        "Id": "74a6b7630f45d60e04f17825446af310c17607e932fd7f7a83faa7e41e18b28d",
	        "Created": "2022-04-12T19:57:59.319875771Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 227184,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-04-12T19:57:59.700960531Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:44d43b69f3d5ba7f801dca891b535f23f9839671e82277938ec7dc42a22c50d6",
	        "ResolvConfPath": "/var/lib/docker/containers/74a6b7630f45d60e04f17825446af310c17607e932fd7f7a83faa7e41e18b28d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/74a6b7630f45d60e04f17825446af310c17607e932fd7f7a83faa7e41e18b28d/hostname",
	        "HostsPath": "/var/lib/docker/containers/74a6b7630f45d60e04f17825446af310c17607e932fd7f7a83faa7e41e18b28d/hosts",
	        "LogPath": "/var/lib/docker/containers/74a6b7630f45d60e04f17825446af310c17607e932fd7f7a83faa7e41e18b28d/74a6b7630f45d60e04f17825446af310c17607e932fd7f7a83faa7e41e18b28d-json.log",
	        "Name": "/pause-20220412195428-42006",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-20220412195428-42006:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-20220412195428-42006",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/77d0be22511bacc209e251f4dcc4a6bae6e4d3088c7edd8ca98ecb8ee3188f74-init/diff:/var/lib/docker/overlay2/a46d95d024de4bf9705eb193a92586bdab1878cd991975232b71b00099a9dcbd/diff:/var/lib/docker/overlay2/ea82ee4a684697cc3575193cd81b57372b927c9bf8e744fce634f9abd0ce56f9/diff:/var/lib/docker/overlay2/78746ad8dd0d6497f442bd186c99cfd280a7ed0ff07c9d33d217c0f00c8c4565/diff:/var/lib/docker/overlay2/a402f380eceb56655ea5f1e6ca4a61a01ae014a5df04f1a7d02d8f57ff3e6c84/diff:/var/lib/docker/overlay2/b27a231791a4d14a662f9e6e34fdd213411e56cc17149199657aa480018b3c72/diff:/var/lib/docker/overlay2/0a44e7fc2c8d5589d496b9d0585d39e8e142f48342ff9669a35c370bd0298e42/diff:/var/lib/docker/overlay2/6ca98e52ca7d4cc60d14bd2db9969dd3356e0e0ce3acd5bfb5734e6e59f52c7e/diff:/var/lib/docker/overlay2/9957a7c00c30c9d801326093ddf20994a7ee1daaa54bc4dac5c2dd6d8711bd7e/diff:/var/lib/docker/overlay2/f7a1aafecf6ee716c484b5eecbbf236a53607c253fe283c289707fad85495a88/diff:/var/lib/docker/overlay2/fe8cd1
26522650fedfc827751e0b74da9a882ff48de51bc9dee6428ee3bc1122/diff:/var/lib/docker/overlay2/5b4cc7e4a78288063ad39231ca158608aa28e9dec6015d4e186e4c4d6888017f/diff:/var/lib/docker/overlay2/2a754ceb6abee0f92c99667fae50c7899233e94595630e9caffbf73cda1ff741/diff:/var/lib/docker/overlay2/9e69139d9b2bc63ab678378e004018ece394ec37e8289ba5eb30901dda160da5/diff:/var/lib/docker/overlay2/3db8e6413b3a1f309b81d2e1a79c3d239c4e4568b31a6f4bf92511f477f3a61d/diff:/var/lib/docker/overlay2/5ab54e45d09e2d6da4f4228ebae3075b5974e1d847526c1011fc7368392ef0d2/diff:/var/lib/docker/overlay2/6daf6a3cf916347bbbb70ace4aab29dd0f272dc9e39d6b0bf14940470857f1d5/diff:/var/lib/docker/overlay2/b85d29df9ed74e769c82a956eb46ca4eaf51018e94270fee2f58a6f2d82c354c/diff:/var/lib/docker/overlay2/0804b9c30e0dcc68e15139106e47bca1969b010d520652c87ff1476f5da9b799/diff:/var/lib/docker/overlay2/2ef50ba91c77826aae2efca8daf7194c2d56fd8e745476a35413585cdab580a6/diff:/var/lib/docker/overlay2/6f5a272367c30d47254dedc8a42e6b2791c406c3b74fd6a8242d568e4ec362e3/diff:/var/lib/d
ocker/overlay2/e978bd5ca7463862ca1b51d0bf19f95d916464dc866f09f1ab4a5ae4c082c3a9/diff:/var/lib/docker/overlay2/0d60a5805e276ca3bff4824250eab1d2960e9d10d28282e07652204c07dc107f/diff:/var/lib/docker/overlay2/d00efa0bc999057fcf3efdeed81022cc8b9b9871919f11d7d9199a3d22fda41b/diff:/var/lib/docker/overlay2/44d3db5bf7925c4cc8ee60008ff23d799e12ea6586850d797b930fa796788861/diff:/var/lib/docker/overlay2/4af15c525b7ce96b7fd4117c156f53cf9099702641c2907909c12b7019563d44/diff:/var/lib/docker/overlay2/ae9ca4b8da4afb1303158a42ec2ac83dc057c0eaefcd69b7eeaa094ae24a39e7/diff:/var/lib/docker/overlay2/afb8ebd776ddcba17d1056f2350cd0b303c6664964644896a92e9c07252b5d95/diff:/var/lib/docker/overlay2/41b6235378ad54ccaec907f16811e7cd66bd777db63151293f4d8247a33af8f1/diff:/var/lib/docker/overlay2/e079465076581cb577a9d5c7d676cecb6495ddd73d9fc330e734203dd7e48607/diff:/var/lib/docker/overlay2/2d3a7c3e62a99d54d94c2562e13b904453442bda8208afe73cdbe1afdbdd0684/diff:/var/lib/docker/overlay2/b9e03b9cbc1c5a9bbdbb0c99ca5d7539c2fa81a37872c40e07377b52f19
50f4b/diff:/var/lib/docker/overlay2/fd0b72378869edec809e7ead1e4448ae67c73245e0e98d751c51253c80f12d56/diff:/var/lib/docker/overlay2/a34f5625ad35eb2eb1058204a5c23590d70d9aae62a3a0cf05f87501c388ccde/diff:/var/lib/docker/overlay2/6221ad5f4d7b133c35d96ab112cf2eb437196475a72ea0ec8952c058c6644381/diff:/var/lib/docker/overlay2/b33a322162ab62a47e5e731b35da4a989d8a79fcb67e1925b109eace6772370c/diff:/var/lib/docker/overlay2/b52fc81aca49f276f1c709fa139521063628f4042b9da5969a3487a57ee3226b/diff:/var/lib/docker/overlay2/5b4d11a181cad1ea657c7ea99d422b51c942ece21b8d24442b4e8806644e0e1c/diff:/var/lib/docker/overlay2/1620ce1d42f02f38d07f3ff0970e3df6940a3be20f3c7cd835f4f40f5cc2d010/diff:/var/lib/docker/overlay2/43f18c528700dc241024bb24f43a0d5192ecc9575f4b053582410f6265326434/diff:/var/lib/docker/overlay2/e59874999e485483e50da428a499e40c91890c33515857454d7a64bc04ca0c43/diff:/var/lib/docker/overlay2/a120ff1bbaa325cd87d2682d6751d3bf287b66d4bbe31bd1f9f6283d724491ac/diff:/var/lib/docker/overlay2/a6a6f3646fabc023283ff6349b9627be8332c4
bb740688f8fda12c98bd76b725/diff:/var/lib/docker/overlay2/3c2b110c4b3a8689b2792b2b73f99f06bd9858b494c2164e812208579b0223f2/diff:/var/lib/docker/overlay2/98e3881e2e4128283f8d66fafc082bc795e22eab77f135635d3249367b92ba5c/diff:/var/lib/docker/overlay2/ce937670cf64eff618c699bfd15e46c6d70c0184fef594182e5ec6df83b265bc/diff",
	                "MergedDir": "/var/lib/docker/overlay2/77d0be22511bacc209e251f4dcc4a6bae6e4d3088c7edd8ca98ecb8ee3188f74/merged",
	                "UpperDir": "/var/lib/docker/overlay2/77d0be22511bacc209e251f4dcc4a6bae6e4d3088c7edd8ca98ecb8ee3188f74/diff",
	                "WorkDir": "/var/lib/docker/overlay2/77d0be22511bacc209e251f4dcc4a6bae6e4d3088c7edd8ca98ecb8ee3188f74/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-20220412195428-42006",
	                "Source": "/var/lib/docker/volumes/pause-20220412195428-42006/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-20220412195428-42006",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-20220412195428-42006",
	                "name.minikube.sigs.k8s.io": "pause-20220412195428-42006",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0f75bb928e8ac027a49c4dc78bb37c6cdee8489247947ecc90db35c496d71abf",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49377"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49376"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49373"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49375"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49374"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/0f75bb928e8a",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-20220412195428-42006": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "74a6b7630f45",
	                        "pause-20220412195428-42006"
	                    ],
	                    "NetworkID": "e0e8680058858d6f3017b8c830a2946a7333d8bdab094ded846fda14f9ccfd15",
	                    "EndpointID": "d50e791c3b92160155181af9dd3b2783ec53a3ec635f7aeae2f0cb5d5b09bbd9",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
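The inspect dump confirms the recreate path: the second container is attached to a freshly created network (NetworkID e0e86800…, gateway 192.168.76.1, IP 192.168.76.2) rather than the 192.168.67.2 address the original docker run requested. A hedged sketch for extracting just that attachment instead of re-reading the full dump, assuming python3 is available on the agent:

	# Pull only the network attachment out of the inspect JSON.
	docker inspect -f '{{json .NetworkSettings.Networks}}' \
	  pause-20220412195428-42006 | python3 -m json.tool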
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-20220412195428-42006 -n pause-20220412195428-42006
helpers_test.go:244: <<< TestPause/serial/Start FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/Start]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-20220412195428-42006 logs -n 25
helpers_test.go:252: TestPause/serial/Start logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------------|-----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                  Args                   |                 Profile                 |  User   | Version |          Start Time           |           End Time            |
	|---------|-----------------------------------------|-----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| start   | -p                                      | missing-upgrade-20220412195111-42006    | jenkins | v1.25.2 | Tue, 12 Apr 2022 19:52:41 UTC | Tue, 12 Apr 2022 19:53:40 UTC |
	|         | missing-upgrade-20220412195111-42006    |                                         |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr         |                                         |         |         |                               |                               |
	|         | -v=1 --driver=docker                    |                                         |         |         |                               |                               |
	|         | --container-runtime=containerd          |                                         |         |         |                               |                               |
	| start   | -p                                      | kubernetes-upgrade-20220412195142-42006 | jenkins | v1.25.2 | Tue, 12 Apr 2022 19:52:48 UTC | Tue, 12 Apr 2022 19:53:43 UTC |
	|         | kubernetes-upgrade-20220412195142-42006 |                                         |         |         |                               |                               |
	|         | --memory=2200                           |                                         |         |         |                               |                               |
	|         | --kubernetes-version=v1.23.6-rc.0       |                                         |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=docker  |                                         |         |         |                               |                               |
	|         | --container-runtime=containerd          |                                         |         |         |                               |                               |
	| delete  | -p                                      | missing-upgrade-20220412195111-42006    | jenkins | v1.25.2 | Tue, 12 Apr 2022 19:53:40 UTC | Tue, 12 Apr 2022 19:53:44 UTC |
	|         | missing-upgrade-20220412195111-42006    |                                         |         |         |                               |                               |
	| start   | -p                                      | running-upgrade-20220412195256-42006    | jenkins | v1.25.2 | Tue, 12 Apr 2022 19:53:40 UTC | Tue, 12 Apr 2022 19:54:20 UTC |
	|         | running-upgrade-20220412195256-42006    |                                         |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr         |                                         |         |         |                               |                               |
	|         | -v=1 --driver=docker                    |                                         |         |         |                               |                               |
	|         | --container-runtime=containerd          |                                         |         |         |                               |                               |
	| delete  | -p                                      | running-upgrade-20220412195256-42006    | jenkins | v1.25.2 | Tue, 12 Apr 2022 19:54:20 UTC | Tue, 12 Apr 2022 19:54:23 UTC |
	|         | running-upgrade-20220412195256-42006    |                                         |         |         |                               |                               |
	| start   | -p                                      | kubernetes-upgrade-20220412195142-42006 | jenkins | v1.25.2 | Tue, 12 Apr 2022 19:53:43 UTC | Tue, 12 Apr 2022 19:54:25 UTC |
	|         | kubernetes-upgrade-20220412195142-42006 |                                         |         |         |                               |                               |
	|         | --memory=2200                           |                                         |         |         |                               |                               |
	|         | --kubernetes-version=v1.23.6-rc.0       |                                         |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=docker  |                                         |         |         |                               |                               |
	|         | --container-runtime=containerd          |                                         |         |         |                               |                               |
	| delete  | -p                                      | kubernetes-upgrade-20220412195142-42006 | jenkins | v1.25.2 | Tue, 12 Apr 2022 19:54:26 UTC | Tue, 12 Apr 2022 19:54:29 UTC |
	|         | kubernetes-upgrade-20220412195142-42006 |                                         |         |         |                               |                               |
	| start   | -p                                      | cert-options-20220412195344-42006       | jenkins | v1.25.2 | Tue, 12 Apr 2022 19:53:44 UTC | Tue, 12 Apr 2022 19:54:33 UTC |
	|         | cert-options-20220412195344-42006       |                                         |         |         |                               |                               |
	|         | --memory=2048                           |                                         |         |         |                               |                               |
	|         | --apiserver-ips=127.0.0.1               |                                         |         |         |                               |                               |
	|         | --apiserver-ips=192.168.15.15           |                                         |         |         |                               |                               |
	|         | --apiserver-names=localhost             |                                         |         |         |                               |                               |
	|         | --apiserver-names=www.google.com        |                                         |         |         |                               |                               |
	|         | --apiserver-port=8555                   |                                         |         |         |                               |                               |
	|         | --driver=docker                         |                                         |         |         |                               |                               |
	|         | --container-runtime=containerd          |                                         |         |         |                               |                               |
	| -p      | cert-options-20220412195344-42006       | cert-options-20220412195344-42006       | jenkins | v1.25.2 | Tue, 12 Apr 2022 19:54:33 UTC | Tue, 12 Apr 2022 19:54:34 UTC |
	|         | ssh openssl x509 -text -noout -in       |                                         |         |         |                               |                               |
	|         | /var/lib/minikube/certs/apiserver.crt   |                                         |         |         |                               |                               |
	| ssh     | -p                                      | cert-options-20220412195344-42006       | jenkins | v1.25.2 | Tue, 12 Apr 2022 19:54:34 UTC | Tue, 12 Apr 2022 19:54:34 UTC |
	|         | cert-options-20220412195344-42006       |                                         |         |         |                               |                               |
	|         | -- sudo cat                             |                                         |         |         |                               |                               |
	|         | /etc/kubernetes/admin.conf              |                                         |         |         |                               |                               |
	| delete  | -p                                      | cert-options-20220412195344-42006       | jenkins | v1.25.2 | Tue, 12 Apr 2022 19:54:34 UTC | Tue, 12 Apr 2022 19:54:42 UTC |
	|         | cert-options-20220412195344-42006       |                                         |         |         |                               |                               |
	| start   | -p auto-20220412195201-42006            | auto-20220412195201-42006               | jenkins | v1.25.2 | Tue, 12 Apr 2022 19:54:29 UTC | Tue, 12 Apr 2022 19:55:30 UTC |
	|         | --memory=2048                           |                                         |         |         |                               |                               |
	|         | --alsologtostderr                       |                                         |         |         |                               |                               |
	|         | --wait=true --wait-timeout=5m           |                                         |         |         |                               |                               |
	|         | --driver=docker                         |                                         |         |         |                               |                               |
	|         | --container-runtime=containerd          |                                         |         |         |                               |                               |
	| ssh     | -p auto-20220412195201-42006            | auto-20220412195201-42006               | jenkins | v1.25.2 | Tue, 12 Apr 2022 19:55:30 UTC | Tue, 12 Apr 2022 19:55:31 UTC |
	|         | pgrep -a kubelet                        |                                         |         |         |                               |                               |
	| delete  | -p auto-20220412195201-42006            | auto-20220412195201-42006               | jenkins | v1.25.2 | Tue, 12 Apr 2022 19:55:45 UTC | Tue, 12 Apr 2022 19:55:47 UTC |
	| start   | -p                                      | custom-weave-20220412195203-42006       | jenkins | v1.25.2 | Tue, 12 Apr 2022 19:54:42 UTC | Tue, 12 Apr 2022 19:55:57 UTC |
	|         | custom-weave-20220412195203-42006       |                                         |         |         |                               |                               |
	|         | --memory=2048 --alsologtostderr         |                                         |         |         |                               |                               |
	|         | --wait=true --wait-timeout=5m           |                                         |         |         |                               |                               |
	|         | --cni=testdata/weavenet.yaml            |                                         |         |         |                               |                               |
	|         | --driver=docker                         |                                         |         |         |                               |                               |
	|         | --container-runtime=containerd          |                                         |         |         |                               |                               |
	| ssh     | -p                                      | custom-weave-20220412195203-42006       | jenkins | v1.25.2 | Tue, 12 Apr 2022 19:55:57 UTC | Tue, 12 Apr 2022 19:55:57 UTC |
	|         | custom-weave-20220412195203-42006       |                                         |         |         |                               |                               |
	|         | pgrep -a kubelet                        |                                         |         |         |                               |                               |
	| start   | -p                                      | cert-expiration-20220412195203-42006    | jenkins | v1.25.2 | Tue, 12 Apr 2022 19:52:03 UTC | Tue, 12 Apr 2022 19:56:06 UTC |
	|         | cert-expiration-20220412195203-42006    |                                         |         |         |                               |                               |
	|         | --memory=2048 --cert-expiration=3m      |                                         |         |         |                               |                               |
	|         | --driver=docker                         |                                         |         |         |                               |                               |
	|         | --container-runtime=containerd          |                                         |         |         |                               |                               |
	| delete  | -p                                      | custom-weave-20220412195203-42006       | jenkins | v1.25.2 | Tue, 12 Apr 2022 19:56:06 UTC | Tue, 12 Apr 2022 19:56:09 UTC |
	|         | custom-weave-20220412195203-42006       |                                         |         |         |                               |                               |
	| start   | -p cilium-20220412195203-42006          | cilium-20220412195203-42006             | jenkins | v1.25.2 | Tue, 12 Apr 2022 19:55:47 UTC | Tue, 12 Apr 2022 19:57:10 UTC |
	|         | --memory=2048                           |                                         |         |         |                               |                               |
	|         | --alsologtostderr                       |                                         |         |         |                               |                               |
	|         | --wait=true --wait-timeout=5m           |                                         |         |         |                               |                               |
	|         | --cni=cilium --driver=docker            |                                         |         |         |                               |                               |
	|         | --container-runtime=containerd          |                                         |         |         |                               |                               |
	| ssh     | -p cilium-20220412195203-42006          | cilium-20220412195203-42006             | jenkins | v1.25.2 | Tue, 12 Apr 2022 19:57:15 UTC | Tue, 12 Apr 2022 19:57:15 UTC |
	|         | pgrep -a kubelet                        |                                         |         |         |                               |                               |
	| delete  | -p cilium-20220412195203-42006          | cilium-20220412195203-42006             | jenkins | v1.25.2 | Tue, 12 Apr 2022 19:57:26 UTC | Tue, 12 Apr 2022 19:57:29 UTC |
	| start   | -p                                      | enable-default-cni-20220412195202-42006 | jenkins | v1.25.2 | Tue, 12 Apr 2022 19:57:29 UTC | Tue, 12 Apr 2022 19:58:30 UTC |
	|         | enable-default-cni-20220412195202-42006 |                                         |         |         |                               |                               |
	|         | --memory=2048 --alsologtostderr         |                                         |         |         |                               |                               |
	|         | --wait=true --wait-timeout=5m           |                                         |         |         |                               |                               |
	|         | --enable-default-cni=true               |                                         |         |         |                               |                               |
	|         | --driver=docker                         |                                         |         |         |                               |                               |
	|         | --container-runtime=containerd          |                                         |         |         |                               |                               |
	| ssh     | -p                                      | enable-default-cni-20220412195202-42006 | jenkins | v1.25.2 | Tue, 12 Apr 2022 19:58:31 UTC | Tue, 12 Apr 2022 19:58:31 UTC |
	|         | enable-default-cni-20220412195202-42006 |                                         |         |         |                               |                               |
	|         | pgrep -a kubelet                        |                                         |         |         |                               |                               |
	| start   | -p                                      | cert-expiration-20220412195203-42006    | jenkins | v1.25.2 | Tue, 12 Apr 2022 19:59:06 UTC | Tue, 12 Apr 2022 19:59:21 UTC |
	|         | cert-expiration-20220412195203-42006    |                                         |         |         |                               |                               |
	|         | --memory=2048                           |                                         |         |         |                               |                               |
	|         | --cert-expiration=8760h                 |                                         |         |         |                               |                               |
	|         | --driver=docker                         |                                         |         |         |                               |                               |
	|         | --container-runtime=containerd          |                                         |         |         |                               |                               |
	| delete  | -p                                      | cert-expiration-20220412195203-42006    | jenkins | v1.25.2 | Tue, 12 Apr 2022 19:59:21 UTC | Tue, 12 Apr 2022 19:59:24 UTC |
	|         | cert-expiration-20220412195203-42006    |                                         |         |         |                               |                               |
	|---------|-----------------------------------------|-----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/04/12 19:59:24
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.18 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0412 19:59:24.334098  234625 out.go:297] Setting OutFile to fd 1 ...
	I0412 19:59:24.334239  234625 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0412 19:59:24.334252  234625 out.go:310] Setting ErrFile to fd 2...
	I0412 19:59:24.334260  234625 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0412 19:59:24.334387  234625 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/bin
	I0412 19:59:24.334683  234625 out.go:304] Setting JSON to false
	I0412 19:59:24.336564  234625 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":9718,"bootTime":1649783847,"procs":934,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1023-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0412 19:59:24.336639  234625 start.go:125] virtualization: kvm guest
	I0412 19:59:24.339497  234625 out.go:176] * [kindnet-20220412195202-42006] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	I0412 19:59:24.341050  234625 out.go:176]   - MINIKUBE_LOCATION=13812
	I0412 19:59:24.339701  234625 notify.go:193] Checking for updates...
	I0412 19:59:24.342602  234625 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0412 19:59:24.344189  234625 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0412 19:59:24.345784  234625 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube
	I0412 19:59:24.347397  234625 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0412 19:59:24.347890  234625 config.go:178] Loaded profile config "calico-20220412195203-42006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.5
	I0412 19:59:24.347994  234625 config.go:178] Loaded profile config "enable-default-cni-20220412195202-42006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.5
	I0412 19:59:24.348140  234625 config.go:178] Loaded profile config "pause-20220412195428-42006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.5
	I0412 19:59:24.348206  234625 driver.go:346] Setting default libvirt URI to qemu:///system
	I0412 19:59:24.394733  234625 docker.go:137] docker version: linux-20.10.14
	I0412 19:59:24.394842  234625 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0412 19:59:24.495597  234625 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:75 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:49 SystemTime:2022-04-12 19:59:24.426483159 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1023-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662799872 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clie
ntInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0412 19:59:24.495701  234625 docker.go:254] overlay module found
	I0412 19:59:24.498059  234625 out.go:176] * Using the docker driver based on user configuration
	I0412 19:59:24.498101  234625 start.go:284] selected driver: docker
	I0412 19:59:24.498109  234625 start.go:801] validating driver "docker" against <nil>
	I0412 19:59:24.498154  234625 start.go:812] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	W0412 19:59:24.498233  234625 oci.go:120] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0412 19:59:24.498258  234625 out.go:241] ! Your cgroup does not allow setting memory.
	I0412 19:59:24.499962  234625 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0412 19:59:24.500690  234625 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0412 19:59:24.600012  234625 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:75 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:49 SystemTime:2022-04-12 19:59:24.531537467 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1023-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662799872 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clie
ntInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0412 19:59:24.600181  234625 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0412 19:59:24.600379  234625 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0412 19:59:24.602706  234625 out.go:176] * Using Docker driver with the root privilege
	I0412 19:59:24.602738  234625 cni.go:93] Creating CNI manager for "kindnet"
	I0412 19:59:24.602753  234625 cni.go:217] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0412 19:59:24.602762  234625 cni.go:222] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0412 19:59:24.602775  234625 start_flags.go:301] Found "CNI" CNI - setting NetworkPlugin=cni
	I0412 19:59:24.602791  234625 start_flags.go:306] config:
	{Name:kindnet-20220412195202-42006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:kindnet-20220412195202-42006 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0412 19:59:24.604933  234625 out.go:176] * Starting control plane node kindnet-20220412195202-42006 in cluster kindnet-20220412195202-42006
	I0412 19:59:24.605003  234625 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0412 19:59:24.606597  234625 out.go:176] * Pulling base image ...
	I0412 19:59:24.606630  234625 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime containerd
	I0412 19:59:24.606673  234625 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-containerd-overlay2-amd64.tar.lz4
	I0412 19:59:24.606687  234625 cache.go:57] Caching tarball of preloaded images
	I0412 19:59:24.606723  234625 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon
	I0412 19:59:24.606991  234625 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0412 19:59:24.607011  234625 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.5 on containerd
	I0412 19:59:24.607155  234625 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220412195202-42006/config.json ...
	I0412 19:59:24.607189  234625 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220412195202-42006/config.json: {Name:mk96c1d1e18e9cc0d948a88792a7261621bb1906 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 19:59:24.657122  234625 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon, skipping pull
	I0412 19:59:24.657151  234625 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 exists in daemon, skipping load
	I0412 19:59:24.657174  234625 cache.go:206] Successfully downloaded all kic artifacts
	I0412 19:59:24.657214  234625 start.go:352] acquiring machines lock for kindnet-20220412195202-42006: {Name:mk9278724d41a33f689e63fe04712fa9ece6a9db Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0412 19:59:24.657383  234625 start.go:356] acquired machines lock for "kindnet-20220412195202-42006" in 129.688µs
	I0412 19:59:24.657415  234625 start.go:91] Provisioning new machine with config: &{Name:kindnet-20220412195202-42006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:kindnet-20220412195202-42006 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0412 19:59:24.657537  234625 start.go:131] createHost starting for "" (driver="docker")
	I0412 19:59:23.217220  200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
	I0412 19:59:25.218123  200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
	I0412 19:59:27.717321  200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
	I0412 19:59:24.392933  215682 pod_ready.go:102] pod "calico-kube-controllers-8594699699-bnsh8" in "kube-system" namespace has status "Ready":"False"
	I0412 19:59:26.893188  215682 pod_ready.go:102] pod "calico-kube-controllers-8594699699-bnsh8" in "kube-system" namespace has status "Ready":"False"
	I0412 19:59:24.660324  234625 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0412 19:59:24.660584  234625 start.go:165] libmachine.API.Create for "kindnet-20220412195202-42006" (driver="docker")
	I0412 19:59:24.660619  234625 client.go:168] LocalClient.Create starting
	I0412 19:59:24.660700  234625 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem
	I0412 19:59:24.660743  234625 main.go:134] libmachine: Decoding PEM data...
	I0412 19:59:24.660767  234625 main.go:134] libmachine: Parsing certificate...
	I0412 19:59:24.660848  234625 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem
	I0412 19:59:24.660881  234625 main.go:134] libmachine: Decoding PEM data...
	I0412 19:59:24.660901  234625 main.go:134] libmachine: Parsing certificate...
	I0412 19:59:24.661225  234625 cli_runner.go:164] Run: docker network inspect kindnet-20220412195202-42006 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0412 19:59:24.694938  234625 cli_runner.go:211] docker network inspect kindnet-20220412195202-42006 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0412 19:59:24.695024  234625 network_create.go:272] running [docker network inspect kindnet-20220412195202-42006] to gather additional debugging logs...
	I0412 19:59:24.695052  234625 cli_runner.go:164] Run: docker network inspect kindnet-20220412195202-42006
	W0412 19:59:24.730811  234625 cli_runner.go:211] docker network inspect kindnet-20220412195202-42006 returned with exit code 1
	I0412 19:59:24.730843  234625 network_create.go:275] error running [docker network inspect kindnet-20220412195202-42006]: docker network inspect kindnet-20220412195202-42006: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kindnet-20220412195202-42006
	I0412 19:59:24.730878  234625 network_create.go:277] output of [docker network inspect kindnet-20220412195202-42006]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kindnet-20220412195202-42006
	
	** /stderr **
	I0412 19:59:24.730940  234625 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0412 19:59:24.768260  234625 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-3941532cd703 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:87:d3:29:2b}}
	I0412 19:59:24.768721  234625 network.go:240] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName:br-6a56a3e6bf06 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:9a:ff:38:75}}
	I0412 19:59:24.769301  234625 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.67.0:0xc000c22240] misses:0}
	I0412 19:59:24.769343  234625 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0412 19:59:24.769356  234625 network_create.go:115] attempt to create docker network kindnet-20220412195202-42006 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0412 19:59:24.769429  234625 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kindnet-20220412195202-42006
	I0412 19:59:24.841511  234625 network_create.go:99] docker network kindnet-20220412195202-42006 192.168.67.0/24 created
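
The network.go lines above show how minikube picks a cluster subnet: candidate 192.168.x.0/24 blocks are scanned in order, blocks already claimed by other profiles (49 and 58 here) are skipped, and the first free block (67) is reserved for one minute while the docker network is created. A minimal Go sketch of the same idea, assuming the fixed step of 9 between candidates that the 49, 58, 67 sequence suggests; the names below are illustrative, not minikube's actual network.go API:

	package main

	import (
		"fmt"
		"net"
	)

	// firstFreeSubnet walks candidate /24s starting at 192.168.49.0 in steps
	// of 9 (49, 58, 67, ...) and returns the first one not already taken.
	func firstFreeSubnet(taken []*net.IPNet) (*net.IPNet, error) {
		for third := 49; third <= 247; third += 9 {
			_, candidate, err := net.ParseCIDR(fmt.Sprintf("192.168.%d.0/24", third))
			if err != nil {
				return nil, err
			}
			free := true
			for _, t := range taken {
				// Overlap check: either network contains the other's base address.
				if t.Contains(candidate.IP) || candidate.Contains(t.IP) {
					free = false
					break
				}
			}
			if free {
				return candidate, nil
			}
		}
		return nil, fmt.Errorf("no free 192.168.x.0/24 subnet found")
	}

	func main() {
		_, a, _ := net.ParseCIDR("192.168.49.0/24") // taken by an earlier profile
		_, b, _ := net.ParseCIDR("192.168.58.0/24") // taken by an earlier profile
		s, err := firstFreeSubnet([]*net.IPNet{a, b})
		if err != nil {
			panic(err)
		}
		fmt.Println("using free private subnet:", s) // 192.168.67.0/24, as in the log
	}
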
	I0412 19:59:24.841545  234625 kic.go:106] calculated static IP "192.168.67.2" for the "kindnet-20220412195202-42006" container
	I0412 19:59:24.841619  234625 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0412 19:59:24.877293  234625 cli_runner.go:164] Run: docker volume create kindnet-20220412195202-42006 --label name.minikube.sigs.k8s.io=kindnet-20220412195202-42006 --label created_by.minikube.sigs.k8s.io=true
	I0412 19:59:24.915458  234625 oci.go:103] Successfully created a docker volume kindnet-20220412195202-42006
	I0412 19:59:24.915539  234625 cli_runner.go:164] Run: docker run --rm --name kindnet-20220412195202-42006-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-20220412195202-42006 --entrypoint /usr/bin/test -v kindnet-20220412195202-42006:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -d /var/lib
	I0412 19:59:25.504270  234625 oci.go:107] Successfully prepared a docker volume kindnet-20220412195202-42006
	I0412 19:59:25.504323  234625 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime containerd
	I0412 19:59:25.504354  234625 kic.go:179] Starting extracting preloaded images to volume ...
	I0412 19:59:25.504427  234625 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-20220412195202-42006:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -I lz4 -xf /preloaded.tar -C /extractDir
	I0412 19:59:29.717968  200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
	I0412 19:59:32.218157  200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
	I0412 19:59:29.391612  215682 pod_ready.go:102] pod "calico-kube-controllers-8594699699-bnsh8" in "kube-system" namespace has status "Ready":"False"
	I0412 19:59:31.392017  215682 pod_ready.go:102] pod "calico-kube-controllers-8594699699-bnsh8" in "kube-system" namespace has status "Ready":"False"
	I0412 19:59:33.394289  215682 pod_ready.go:102] pod "calico-kube-controllers-8594699699-bnsh8" in "kube-system" namespace has status "Ready":"False"
	I0412 19:59:33.135503  234625 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-20220412195202-42006:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -I lz4 -xf /preloaded.tar -C /extractDir: (7.630958725s)
	I0412 19:59:33.135546  234625 kic.go:188] duration metric: took 7.631188 seconds to extract preloaded images to volume
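
The completed sidecar run above is the preload step: an lz4-compressed tarball of container images is bind-mounted into a throwaway kicbase container and untarred into the profile's /var volume, which is why the later `crictl images` check finds everything already present. A rough Go equivalent of the extraction alone (not the Docker plumbing), assuming the third-party github.com/pierrec/lz4/v4 package; minikube itself shells out to tar as shown in the log:

	package main

	import (
		"archive/tar"
		"fmt"
		"io"
		"os"

		"github.com/pierrec/lz4/v4" // assumed dependency; not what minikube uses here
	)

	// extractPreload streams a .tar.lz4 preload into destDir, mirroring what
	// `tar -I lz4 -xf /preloaded.tar -C /extractDir` does inside the sidecar.
	func extractPreload(tarball, destDir string) error {
		f, err := os.Open(tarball)
		if err != nil {
			return err
		}
		defer f.Close()

		tr := tar.NewReader(lz4.NewReader(f))
		for {
			hdr, err := tr.Next()
			if err == io.EOF {
				return nil
			}
			if err != nil {
				return err
			}
			target := destDir + "/" + hdr.Name
			switch hdr.Typeflag {
			case tar.TypeDir:
				if err := os.MkdirAll(target, os.FileMode(hdr.Mode)); err != nil {
					return err
				}
			case tar.TypeReg:
				out, err := os.OpenFile(target, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, os.FileMode(hdr.Mode))
				if err != nil {
					return err
				}
				if _, err := io.Copy(out, tr); err != nil {
					out.Close()
					return err
				}
				out.Close()
			default:
				// symlinks, hardlinks etc. omitted in this sketch
			}
		}
	}

	func main() {
		err := extractPreload("preloaded-images-k8s-v18-v1.23.5-containerd-overlay2-amd64.tar.lz4", "/tmp/extractDir")
		fmt.Println("extract:", err)
	}
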
	W0412 19:59:33.135597  234625 oci.go:136] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0412 19:59:33.135612  234625 oci.go:120] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0412 19:59:33.135684  234625 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0412 19:59:33.236242  234625 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kindnet-20220412195202-42006 --name kindnet-20220412195202-42006 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-20220412195202-42006 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kindnet-20220412195202-42006 --network kindnet-20220412195202-42006 --ip 192.168.67.2 --volume kindnet-20220412195202-42006:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5
	I0412 19:59:33.700774  234625 cli_runner.go:164] Run: docker container inspect kindnet-20220412195202-42006 --format={{.State.Running}}
	I0412 19:59:33.772841  234625 cli_runner.go:164] Run: docker container inspect kindnet-20220412195202-42006 --format={{.State.Status}}
	I0412 19:59:33.814140  234625 cli_runner.go:164] Run: docker exec kindnet-20220412195202-42006 stat /var/lib/dpkg/alternatives/iptables
	I0412 19:59:33.885208  234625 oci.go:279] the created container "kindnet-20220412195202-42006" has a running status.
	I0412 19:59:33.885243  234625 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/kindnet-20220412195202-42006/id_rsa...
	I0412 19:59:33.988927  234625 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/kindnet-20220412195202-42006/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0412 19:59:34.095658  234625 cli_runner.go:164] Run: docker container inspect kindnet-20220412195202-42006 --format={{.State.Status}}
	I0412 19:59:34.154504  234625 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0412 19:59:34.154534  234625 kic_runner.go:114] Args: [docker exec --privileged kindnet-20220412195202-42006 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0412 19:59:34.265172  234625 cli_runner.go:164] Run: docker container inspect kindnet-20220412195202-42006 --format={{.State.Status}}
	I0412 19:59:34.303689  234625 machine.go:88] provisioning docker machine ...
	I0412 19:59:34.303737  234625 ubuntu.go:169] provisioning hostname "kindnet-20220412195202-42006"
	I0412 19:59:34.303791  234625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220412195202-42006
	I0412 19:59:34.717995  200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
	I0412 19:59:37.216943  200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
	I0412 19:59:35.892247  215682 pod_ready.go:102] pod "calico-kube-controllers-8594699699-bnsh8" in "kube-system" namespace has status "Ready":"False"
	I0412 19:59:38.392656  215682 pod_ready.go:102] pod "calico-kube-controllers-8594699699-bnsh8" in "kube-system" namespace has status "Ready":"False"
	I0412 19:59:34.342549  234625 main.go:134] libmachine: Using SSH client type: native
	I0412 19:59:34.342769  234625 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7d71c0] 0x7da220 <nil>  [] 0s} 127.0.0.1 49382 <nil> <nil>}
	I0412 19:59:34.342791  234625 main.go:134] libmachine: About to run SSH command:
	sudo hostname kindnet-20220412195202-42006 && echo "kindnet-20220412195202-42006" | sudo tee /etc/hostname
	I0412 19:59:34.478710  234625 main.go:134] libmachine: SSH cmd err, output: <nil>: kindnet-20220412195202-42006
	
	I0412 19:59:34.478797  234625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220412195202-42006
	I0412 19:59:34.514508  234625 main.go:134] libmachine: Using SSH client type: native
	I0412 19:59:34.514696  234625 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7d71c0] 0x7da220 <nil>  [] 0s} 127.0.0.1 49382 <nil> <nil>}
	I0412 19:59:34.514729  234625 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-20220412195202-42006' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-20220412195202-42006/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-20220412195202-42006' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0412 19:59:34.636254  234625 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0412 19:59:34.636282  234625 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube}
	I0412 19:59:34.636301  234625 ubuntu.go:177] setting up certificates
	I0412 19:59:34.636310  234625 provision.go:83] configureAuth start
	I0412 19:59:34.636356  234625 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-20220412195202-42006
	I0412 19:59:34.670840  234625 provision.go:138] copyHostCerts
	I0412 19:59:34.670908  234625 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem, removing ...
	I0412 19:59:34.670921  234625 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem
	I0412 19:59:34.670988  234625 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem (1082 bytes)
	I0412 19:59:34.671081  234625 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem, removing ...
	I0412 19:59:34.671096  234625 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem
	I0412 19:59:34.671123  234625 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem (1123 bytes)
	I0412 19:59:34.671173  234625 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem, removing ...
	I0412 19:59:34.671181  234625 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem
	I0412 19:59:34.671204  234625 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem (1675 bytes)
	I0412 19:59:34.671242  234625 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem org=jenkins.kindnet-20220412195202-42006 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube kindnet-20220412195202-42006]
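
provision.go then issues a server certificate signed by the minikube CA whose subject alternative names cover the static container IP, loopback, and the hostname aliases listed above. A self-contained crypto/x509 sketch of a certificate with that SAN shape; key sizes, serials, and validity below are illustrative, and this is not minikube's actual provisioning code:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	// newServerCert signs a server certificate against the given CA with the
	// same SAN set the log lists: node IP, loopback, and hostname aliases.
	func newServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, error) {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.kindnet-20220412195202-42006"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // cf. CertExpiration:26280h0m0s in the config dump
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  []net.IP{net.ParseIP("192.168.67.2"), net.ParseIP("127.0.0.1")},
			DNSNames:     []string{"localhost", "minikube", "kindnet-20220412195202-42006"},
		}
		return x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
	}

	func main() {
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		ca, _ := x509.ParseCertificate(caDER)
		der, err := newServerCert(ca, caKey)
		fmt.Println("server cert DER bytes:", len(der), "err:", err)
	}
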
	I0412 19:59:34.782478  234625 provision.go:172] copyRemoteCerts
	I0412 19:59:34.782544  234625 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0412 19:59:34.782579  234625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220412195202-42006
	I0412 19:59:34.817760  234625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49382 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/kindnet-20220412195202-42006/id_rsa Username:docker}
	I0412 19:59:34.906211  234625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0412 19:59:34.925349  234625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0412 19:59:34.947214  234625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0412 19:59:34.966787  234625 provision.go:86] duration metric: configureAuth took 330.462021ms
	I0412 19:59:34.966815  234625 ubuntu.go:193] setting minikube options for container-runtime
	I0412 19:59:34.967000  234625 config.go:178] Loaded profile config "kindnet-20220412195202-42006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.5
	I0412 19:59:34.967013  234625 machine.go:91] provisioned docker machine in 663.294289ms
	I0412 19:59:34.967019  234625 client.go:171] LocalClient.Create took 10.306388857s
	I0412 19:59:34.967034  234625 start.go:173] duration metric: libmachine.API.Create for "kindnet-20220412195202-42006" took 10.306453895s
	I0412 19:59:34.967049  234625 start.go:306] post-start starting for "kindnet-20220412195202-42006" (driver="docker")
	I0412 19:59:34.967060  234625 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0412 19:59:34.967107  234625 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0412 19:59:34.967146  234625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220412195202-42006
	I0412 19:59:35.006426  234625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49382 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/kindnet-20220412195202-42006/id_rsa Username:docker}
	I0412 19:59:35.096908  234625 ssh_runner.go:195] Run: cat /etc/os-release
	I0412 19:59:35.100043  234625 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0412 19:59:35.100113  234625 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0412 19:59:35.100132  234625 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0412 19:59:35.100141  234625 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0412 19:59:35.100154  234625 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/addons for local assets ...
	I0412 19:59:35.100216  234625 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files for local assets ...
	I0412 19:59:35.100289  234625 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/420062.pem -> 420062.pem in /etc/ssl/certs
	I0412 19:59:35.100388  234625 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0412 19:59:35.108243  234625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/420062.pem --> /etc/ssl/certs/420062.pem (1708 bytes)
	I0412 19:59:35.128335  234625 start.go:309] post-start completed in 161.261633ms
	I0412 19:59:35.128743  234625 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-20220412195202-42006
	I0412 19:59:35.163301  234625 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220412195202-42006/config.json ...
	I0412 19:59:35.163570  234625 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0412 19:59:35.163614  234625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220412195202-42006
	I0412 19:59:35.199687  234625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49382 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/kindnet-20220412195202-42006/id_rsa Username:docker}
	I0412 19:59:35.289368  234625 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0412 19:59:35.293975  234625 start.go:134] duration metric: createHost completed in 10.636420263s
	I0412 19:59:35.294008  234625 start.go:81] releasing machines lock for "kindnet-20220412195202-42006", held for 10.636608341s
	I0412 19:59:35.294107  234625 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-20220412195202-42006
	I0412 19:59:35.329324  234625 ssh_runner.go:195] Run: systemctl --version
	I0412 19:59:35.329391  234625 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0412 19:59:35.329396  234625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220412195202-42006
	I0412 19:59:35.329451  234625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220412195202-42006
	I0412 19:59:35.366712  234625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49382 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/kindnet-20220412195202-42006/id_rsa Username:docker}
	I0412 19:59:35.370262  234625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49382 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/kindnet-20220412195202-42006/id_rsa Username:docker}
	I0412 19:59:35.452540  234625 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0412 19:59:35.475848  234625 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0412 19:59:35.486091  234625 docker.go:183] disabling docker service ...
	I0412 19:59:35.486153  234625 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0412 19:59:35.503897  234625 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0412 19:59:35.514103  234625 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0412 19:59:35.602325  234625 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0412 19:59:35.682686  234625 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0412 19:59:35.693997  234625 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0412 19:59:35.709312  234625 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLm1vbml0b3IudjEuY2dyb3VwcyJdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNiIKICAgIHN0YXRzX2NvbGxlY3RfcGVyaW9kID0gMTAKICAgIGVuYWJsZV90bHNfc3RyZWFtaW5nID0gZ
mFsc2UKICAgIG1heF9jb250YWluZXJfbG9nX2xpbmVfc2l6ZSA9IDE2Mzg0CiAgICByZXN0cmljdF9vb21fc2NvcmVfYWRqID0gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZF0KICAgICAgZGlzY2FyZF91bnBhY2tlZF9sYXllcnMgPSB0cnVlCiAgICAgIHNuYXBzaG90dGVyID0gIm92ZXJsYXlmcyIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQuZGVmYXVsdF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnVudHJ1c3RlZF93b3JrbG9hZF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICIiCiAgICAgICAgcnVudGltZV9lbmdpbmUgPSAiIgogICAgICAgIHJ1bnRpbWVfcm9vdCA9ICIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzLnJ1bmNdCiAgICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICBTeXN0ZW1kQ
2dyb3VwID0gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY25pXQogICAgICBiaW5fZGlyID0gIi9vcHQvY25pL2JpbiIKICAgICAgY29uZl9kaXIgPSAiL2V0Yy9jbmkvbmV0Lm1rIgogICAgICBjb25mX3RlbXBsYXRlID0gIiIKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5yZWdpc3RyeV0KICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5Lm1pcnJvcnNdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5Lm1pcnJvcnMuImRvY2tlci5pbyJdCiAgICAgICAgICBlbmRwb2ludCA9IFsiaHR0cHM6Ly9yZWdpc3RyeS0xLmRvY2tlci5pbyJdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuc2VydmljZS52MS5kaWZmLXNlcnZpY2UiXQogICAgZGVmYXVsdCA9IFsid2Fsa2luZyJdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ2MudjEuc2NoZWR1bGVyIl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
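
The containerd configuration travels as a single base64 payload piped through `base64 -d` into /etc/containerd/config.toml; decoded, it is a version = 2 config with root = "/var/lib/containerd", sandbox_image = "k8s.gcr.io/pause:3.6", and the CRI plugin's CNI conf_dir pointed at /etc/cni/net.mk, matching the kubelet.cni-conf-dir extra option. A trivial Go helper for decoding such a payload locally (the short payload below is a stand-in for the full blob above):

	package main

	import (
		"encoding/base64"
		"fmt"
		"os"
	)

	// Decode a base64-encoded config.toml payload (as embedded in the log's
	// `... | base64 -d | sudo tee /etc/containerd/config.toml` command) so it
	// can be read or diffed locally.
	func main() {
		payload := "dmVyc2lvbiA9IDIK" // replace with the full blob from the log
		raw, err := base64.StdEncoding.DecodeString(payload)
		if err != nil {
			fmt.Fprintln(os.Stderr, "decode:", err)
			os.Exit(1)
		}
		fmt.Print(string(raw)) // prints: version = 2
	}
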
	I0412 19:59:35.726756  234625 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0412 19:59:35.734723  234625 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0412 19:59:35.741966  234625 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0412 19:59:35.855077  234625 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0412 19:59:35.927565  234625 start.go:441] Will wait 60s for socket path /run/containerd/containerd.sock
	I0412 19:59:35.927640  234625 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0412 19:59:35.931767  234625 start.go:462] Will wait 60s for crictl version
	I0412 19:59:35.931829  234625 ssh_runner.go:195] Run: sudo crictl version
	I0412 19:59:35.959625  234625 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-04-12T19:59:35Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0412 19:59:39.717113  200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
	I0412 19:59:41.718117  200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
	I0412 19:59:40.891783  215682 pod_ready.go:102] pod "calico-kube-controllers-8594699699-bnsh8" in "kube-system" namespace has status "Ready":"False"
	I0412 19:59:42.892174  215682 pod_ready.go:102] pod "calico-kube-controllers-8594699699-bnsh8" in "kube-system" namespace has status "Ready":"False"
	I0412 19:59:47.007016  234625 ssh_runner.go:195] Run: sudo crictl version
	I0412 19:59:47.035718  234625 start.go:471] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.5.10
	RuntimeApiVersion:  v1alpha2
	I0412 19:59:47.035789  234625 ssh_runner.go:195] Run: containerd --version
	I0412 19:59:47.057937  234625 ssh_runner.go:195] Run: containerd --version
	I0412 19:59:47.083583  234625 out.go:176] * Preparing Kubernetes v1.23.5 on containerd 1.5.10 ...
	I0412 19:59:47.083694  234625 cli_runner.go:164] Run: docker network inspect kindnet-20220412195202-42006 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0412 19:59:47.119300  234625 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0412 19:59:47.122851  234625 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
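
The one-liner above keeps the host.minikube.internal mapping idempotent: grep -v drops any stale tab-separated entry, the fresh line is appended, and the result is copied back over /etc/hosts (the same pattern recurs below for control-plane.minikube.internal). The same logic as a Go sketch; os.Rename stands in for the log's `sudo cp`:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// upsertHost rewrites an /etc/hosts-style file so it contains exactly one
	// line mapping name to ip: filter out any stale entry ending in
	// "\t<name>", append the fresh one, then replace the file.
	func upsertHost(path, ip, name string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if !strings.HasSuffix(line, "\t"+name) {
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\t"+name)
		tmp := path + ".tmp"
		if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
			return err
		}
		return os.Rename(tmp, path)
	}

	func main() {
		fmt.Println(upsertHost("/tmp/hosts", "192.168.67.1", "host.minikube.internal"))
	}
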
	I0412 19:59:44.217319  200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
	I0412 19:59:46.717677  200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
	I0412 19:59:47.134888  234625 out.go:176]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0412 19:59:47.134973  234625 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime containerd
	I0412 19:59:47.135033  234625 ssh_runner.go:195] Run: sudo crictl images --output json
	I0412 19:59:47.161492  234625 containerd.go:607] all images are preloaded for containerd runtime.
	I0412 19:59:47.161517  234625 containerd.go:521] Images already preloaded, skipping extraction
	I0412 19:59:47.161562  234625 ssh_runner.go:195] Run: sudo crictl images --output json
	I0412 19:59:47.186488  234625 containerd.go:607] all images are preloaded for containerd runtime.
	I0412 19:59:47.186513  234625 cache_images.go:84] Images are preloaded, skipping loading
	I0412 19:59:47.186577  234625 ssh_runner.go:195] Run: sudo crictl info
	I0412 19:59:47.212894  234625 cni.go:93] Creating CNI manager for "kindnet"
	I0412 19:59:47.212932  234625 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0412 19:59:47.212953  234625 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.23.5 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-20220412195202-42006 NodeName:kindnet-20220412195202-42006 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0412 19:59:47.213114  234625 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "kindnet-20220412195202-42006"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.5
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0412 19:59:47.213218  234625 kubeadm.go:936] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.5/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=kindnet-20220412195202-42006 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.5 ClusterName:kindnet-20220412195202-42006 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:}
	I0412 19:59:47.213284  234625 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.5
	I0412 19:59:47.221668  234625 binaries.go:44] Found k8s binaries, skipping transfer
	I0412 19:59:47.221744  234625 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0412 19:59:47.229345  234625 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (573 bytes)
	I0412 19:59:47.244031  234625 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0412 19:59:47.257717  234625 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2057 bytes)
	I0412 19:59:47.271915  234625 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0412 19:59:47.275046  234625 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0412 19:59:47.285681  234625 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220412195202-42006 for IP: 192.168.67.2
	I0412 19:59:47.285815  234625 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.key
	I0412 19:59:47.285882  234625 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.key
	I0412 19:59:47.285948  234625 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220412195202-42006/client.key
	I0412 19:59:47.285980  234625 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220412195202-42006/client.crt with IP's: []
	I0412 19:59:47.707380  234625 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220412195202-42006/client.crt ...
	I0412 19:59:47.707423  234625 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220412195202-42006/client.crt: {Name:mk5059b3c4fae947bb1fc99c8693ca8f2b5e9668 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 19:59:47.707679  234625 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220412195202-42006/client.key ...
	I0412 19:59:47.707699  234625 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220412195202-42006/client.key: {Name:mk6c27fac79f3772ad8e270e49ba33e4795e15de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 19:59:47.707842  234625 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220412195202-42006/apiserver.key.c7fa3a9e
	I0412 19:59:47.707864  234625 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220412195202-42006/apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0412 19:59:47.835182  234625 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220412195202-42006/apiserver.crt.c7fa3a9e ...
	I0412 19:59:47.835214  234625 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220412195202-42006/apiserver.crt.c7fa3a9e: {Name:mk9e6b042dbd3040132f0c6e4fc317c376013de3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 19:59:47.835433  234625 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220412195202-42006/apiserver.key.c7fa3a9e ...
	I0412 19:59:47.835450  234625 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220412195202-42006/apiserver.key.c7fa3a9e: {Name:mk0670b8a49acf77375ca4180f2f6a38616b9c60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 19:59:47.835571  234625 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220412195202-42006/apiserver.crt.c7fa3a9e -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220412195202-42006/apiserver.crt
	I0412 19:59:47.835658  234625 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220412195202-42006/apiserver.key.c7fa3a9e -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220412195202-42006/apiserver.key
	I0412 19:59:47.835719  234625 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220412195202-42006/proxy-client.key
	I0412 19:59:47.835740  234625 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220412195202-42006/proxy-client.crt with IP's: []
	I0412 19:59:48.032648  234625 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220412195202-42006/proxy-client.crt ...
	I0412 19:59:48.032682  234625 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220412195202-42006/proxy-client.crt: {Name:mk528ca3c8cae5bc77058b8b0d4389c64b0ac73c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 19:59:48.032906  234625 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220412195202-42006/proxy-client.key ...
	I0412 19:59:48.032923  234625 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220412195202-42006/proxy-client.key: {Name:mkcae74fa4c12fae2d02c0880924d829f627972c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 19:59:48.033184  234625 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/42006.pem (1338 bytes)
	W0412 19:59:48.033241  234625 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/42006_empty.pem, impossibly tiny 0 bytes
	I0412 19:59:48.033258  234625 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem (1679 bytes)
	I0412 19:59:48.033316  234625 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem (1082 bytes)
	I0412 19:59:48.033350  234625 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem (1123 bytes)
	I0412 19:59:48.033383  234625 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem (1675 bytes)
	I0412 19:59:48.033438  234625 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/420062.pem (1708 bytes)
	I0412 19:59:48.034144  234625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220412195202-42006/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0412 19:59:48.055187  234625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220412195202-42006/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0412 19:59:48.075056  234625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220412195202-42006/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0412 19:59:48.095916  234625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220412195202-42006/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0412 19:59:48.116341  234625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0412 19:59:48.135114  234625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0412 19:59:48.154103  234625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0412 19:59:48.173233  234625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0412 19:59:48.192800  234625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0412 19:59:48.212546  234625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/42006.pem --> /usr/share/ca-certificates/42006.pem (1338 bytes)
	I0412 19:59:48.233026  234625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/420062.pem --> /usr/share/ca-certificates/420062.pem (1708 bytes)
	I0412 19:59:48.251632  234625 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0412 19:59:48.266099  234625 ssh_runner.go:195] Run: openssl version
	I0412 19:59:48.271402  234625 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/420062.pem && ln -fs /usr/share/ca-certificates/420062.pem /etc/ssl/certs/420062.pem"
	I0412 19:59:48.279695  234625 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/420062.pem
	I0412 19:59:48.283066  234625 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Apr 12 19:26 /usr/share/ca-certificates/420062.pem
	I0412 19:59:48.283119  234625 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/420062.pem
	I0412 19:59:48.288470  234625 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/420062.pem /etc/ssl/certs/3ec20f2e.0"
	I0412 19:59:48.296579  234625 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0412 19:59:48.305946  234625 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0412 19:59:48.309733  234625 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Apr 12 19:21 /usr/share/ca-certificates/minikubeCA.pem
	I0412 19:59:48.309797  234625 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0412 19:59:48.315491  234625 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0412 19:59:48.323461  234625 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/42006.pem && ln -fs /usr/share/ca-certificates/42006.pem /etc/ssl/certs/42006.pem"
	I0412 19:59:48.331682  234625 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42006.pem
	I0412 19:59:48.335099  234625 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Apr 12 19:26 /usr/share/ca-certificates/42006.pem
	I0412 19:59:48.335158  234625 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42006.pem
	I0412 19:59:48.340576  234625 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/42006.pem /etc/ssl/certs/51391683.0"
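
The three test/ln blocks above amount to a manual c_rehash: each CA certificate under /usr/share/ca-certificates gets an /etc/ssl/certs/<subject-hash>.0 symlink so OpenSSL-based clients can find it by subject. A small Go sketch that shells out to the same `openssl x509 -hash -noout` invocation the log shows:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	// linkCertByHash asks openssl for the certificate's subject-name hash,
	// then symlinks <hash>.0 in certsDir at the PEM file.
	func linkCertByHash(pem, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem above
		link := certsDir + "/" + hash + ".0"
		if _, err := os.Lstat(link); err == nil {
			return nil // already linked, like the `test -L ... ||` guard in the log
		}
		return os.Symlink(pem, link)
	}

	func main() {
		fmt.Println(linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"))
	}
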
	I0412 19:59:48.348569  234625 kubeadm.go:391] StartCluster: {Name:kindnet-20220412195202-42006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:kindnet-20220412195202-42006 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0412 19:59:48.348663  234625 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0412 19:59:48.348705  234625 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0412 19:59:48.373690  234625 cri.go:87] found id: ""
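The found id: "" line means the crictl query above matched no existing kube-system containers, so kubeadm proceeds with a fresh init rather than a restart path. A hypothetical helper showing how that --quiet output reduces to a list of IDs (parseIDs is illustrative, not minikube's cri package API):

	package cri // illustrative; not minikube's actual package layout

	import "strings"

	// parseIDs splits `crictl ps -a --quiet` output (one container ID per
	// line) into a slice. Empty output yields nil, which is what the
	// `found id: ""` line above reflects.
	func parseIDs(out string) []string {
		var ids []string
		for _, line := range strings.Split(out, "\n") {
			if line = strings.TrimSpace(line); line != "" {
				ids = append(ids, line)
			}
		}
		return ids
	}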
	I0412 19:59:48.373763  234625 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0412 19:59:48.381689  234625 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0412 19:59:48.390331  234625 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0412 19:59:48.390395  234625 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0412 19:59:48.398073  234625 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0412 19:59:48.398143  234625 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
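The Start line above is the actual control-plane bootstrap: kubeadm init with a version-pinned binary path and a long ignore-preflight-errors list, needed because the "node" is a Docker container where checks such as Swap, Mem, and SystemVerification cannot pass. A sketch of how that invocation is assembled (local bash instead of the ssh_runner; the check list is abbreviated):

	// kubeadm_bootstrap.go: sketch of the bootstrap command logged above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		ignored := []string{
			"DirAvailable--etc-kubernetes-manifests",
			"Port-10250", "Swap", "Mem", "SystemVerification",
			"FileContent--proc-sys-net-bridge-bridge-nf-call-iptables",
			// the real call lists several more DirAvailable/FileAvailable checks
		}
		// bash -c so that $PATH expands, as in the logged command.
		script := `sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" ` +
			`kubeadm init --config /var/tmp/minikube/kubeadm.yaml ` +
			`--ignore-preflight-errors=` + strings.Join(ignored, ",")
		out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
		fmt.Println(string(out), err)
	}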
	I0412 19:59:44.892323  215682 pod_ready.go:102] pod "calico-kube-controllers-8594699699-bnsh8" in "kube-system" namespace has status "Ready":"False"
	I0412 19:59:47.391500  215682 pod_ready.go:102] pod "calico-kube-controllers-8594699699-bnsh8" in "kube-system" namespace has status "Ready":"False"
	I0412 19:59:48.676509  234625 out.go:203]   - Generating certificates and keys ...
	I0412 19:59:48.717877  200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
	I0412 19:59:51.218596  200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
	I0412 19:59:49.892672  215682 pod_ready.go:102] pod "calico-kube-controllers-8594699699-bnsh8" in "kube-system" namespace has status "Ready":"False"
	I0412 19:59:52.391793  215682 pod_ready.go:102] pod "calico-kube-controllers-8594699699-bnsh8" in "kube-system" namespace has status "Ready":"False"
	I0412 19:59:51.433028  234625 out.go:203]   - Booting up control plane ...
	I0412 19:59:53.718296  200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
	I0412 19:59:56.217829  200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
	I0412 19:59:54.392132  215682 pod_ready.go:102] pod "calico-kube-controllers-8594699699-bnsh8" in "kube-system" namespace has status "Ready":"False"
	I0412 19:59:56.392172  215682 pod_ready.go:102] pod "calico-kube-controllers-8594699699-bnsh8" in "kube-system" namespace has status "Ready":"False"
	I0412 19:59:58.392867  215682 pod_ready.go:102] pod "calico-kube-controllers-8594699699-bnsh8" in "kube-system" namespace has status "Ready":"False"
	I0412 19:59:58.218169  200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
	I0412 20:00:00.717695  200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
	I0412 20:00:02.717883  200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
	I0412 20:00:03.478717  234625 out.go:203]   - Configuring RBAC rules ...
	I0412 20:00:03.893499  234625 cni.go:93] Creating CNI manager for "kindnet"
	I0412 20:00:00.892434  215682 pod_ready.go:102] pod "calico-kube-controllers-8594699699-bnsh8" in "kube-system" namespace has status "Ready":"False"
	I0412 20:00:03.392852  215682 pod_ready.go:102] pod "calico-kube-controllers-8594699699-bnsh8" in "kube-system" namespace has status "Ready":"False"
	I0412 20:00:03.895818  234625 out.go:176] * Configuring CNI (Container Networking Interface) ...
	I0412 20:00:03.895907  234625 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0412 20:00:03.899812  234625 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.5/kubectl ...
	I0412 20:00:03.899838  234625 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0412 20:00:03.913929  234625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
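The "scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)" line and the kubectl apply that follows push the kindnet manifest onto the node from memory and apply it with the pinned kubectl. A local sketch of that write-then-apply step (manifest contents stubbed out; minikube streams the bytes over SSH instead):

	// apply_cni.go: write an in-memory manifest and apply it, mirroring the
	// scp-from-memory plus kubectl apply sequence above. Sketch only.
	package main

	import (
		"os"
		"os/exec"
	)

	func main() {
		manifest := []byte("# kindnet DaemonSet YAML would go here\n")
		if err := os.WriteFile("/var/tmp/minikube/cni.yaml", manifest, 0644); err != nil {
			panic(err)
		}
		out, err := exec.Command("sudo",
			"/var/lib/minikube/binaries/v1.23.5/kubectl", "apply",
			"--kubeconfig=/var/lib/minikube/kubeconfig",
			"-f", "/var/tmp/minikube/cni.yaml",
		).CombinedOutput()
		if err != nil {
			panic(string(out))
		}
	}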
	I0412 20:00:05.219805  200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
	I0412 20:00:07.717338  200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
	I0412 20:00:05.892834  215682 pod_ready.go:102] pod "calico-kube-controllers-8594699699-bnsh8" in "kube-system" namespace has status "Ready":"False"
	I0412 20:00:07.893420  215682 pod_ready.go:102] pod "calico-kube-controllers-8594699699-bnsh8" in "kube-system" namespace has status "Ready":"False"
	I0412 20:00:04.692692  234625 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0412 20:00:04.692766  234625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:00:04.692774  234625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl label nodes minikube.k8s.io/version=v1.25.2 minikube.k8s.io/commit=dcd548d63d1c0dcbdc0ffc0bd37d4379117c142f minikube.k8s.io/name=kindnet-20220412195202-42006 minikube.k8s.io/updated_at=2022_04_12T20_00_04_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:00:04.786887  234625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:00:04.786942  234625 ops.go:34] apiserver oom_adj: -16
	I0412 20:00:05.348474  234625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:00:05.848261  234625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:00:06.347958  234625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:00:06.848142  234625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:00:07.348534  234625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:00:07.848181  234625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:00:08.348252  234625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:00:08.848242  234625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:00:09.717617  200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
	I0412 20:00:12.217985  200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
	I0412 20:00:10.392569  215682 pod_ready.go:102] pod "calico-kube-controllers-8594699699-bnsh8" in "kube-system" namespace has status "Ready":"False"
	I0412 20:00:12.893358  215682 pod_ready.go:102] pod "calico-kube-controllers-8594699699-bnsh8" in "kube-system" namespace has status "Ready":"False"
	I0412 20:00:09.348718  234625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:00:09.848435  234625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:00:10.348189  234625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:00:10.848205  234625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:00:11.348276  234625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:00:11.847965  234625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:00:12.348072  234625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:00:12.848241  234625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:00:13.348206  234625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:00:13.847960  234625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:00:14.348831  234625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:00:14.848686  234625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:00:15.348733  234625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:00:15.847949  234625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:00:16.348332  234625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:00:16.422308  234625 kubeadm.go:1020] duration metric: took 11.729581193s to wait for elevateKubeSystemPrivileges.
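The run of "kubectl get sa default" calls above, spaced roughly 500ms apart from 20:00:04 to 20:00:16, is minikube polling until the default service account exists; the "took 11.729581193s to wait for elevateKubeSystemPrivileges" line closes that loop. A sketch of the same poll shape (the 2-minute deadline is an assumption, not minikube's actual timeout):

	// wait_sa.go: poll until `kubectl get sa default` succeeds, mirroring
	// the ~500ms retry cadence visible in the log above. Sketch only.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			err := exec.Command("sudo",
				"/var/lib/minikube/binaries/v1.23.5/kubectl",
				"get", "sa", "default",
				"--kubeconfig=/var/lib/minikube/kubeconfig",
			).Run()
			if err == nil {
				fmt.Println("default service account is ready")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		panic("timed out waiting for default service account")
	}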
	I0412 20:00:16.422402  234625 kubeadm.go:393] StartCluster complete in 28.073846211s
	I0412 20:00:16.422430  234625 settings.go:142] acquiring lock: {Name:mkaf0259d09993f7f0249c32b54fea561e21f88c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 20:00:16.422559  234625 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0412 20:00:16.424828  234625 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig: {Name:mk47182dadcc139652898f38b199a7292a3b4031 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 20:00:16.945845  234625 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "kindnet-20220412195202-42006" rescaled to 1
	I0412 20:00:16.945920  234625 start.go:208] Will wait 5m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0412 20:00:16.947880  234625 out.go:176] * Verifying Kubernetes components...
	I0412 20:00:16.947946  234625 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0412 20:00:16.945962  234625 addons.go:415] enableAddons start: toEnable=map[], additional=[]
	I0412 20:00:16.946039  234625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0412 20:00:16.946209  234625 config.go:178] Loaded profile config "kindnet-20220412195202-42006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.5
	I0412 20:00:16.948060  234625 addons.go:65] Setting storage-provisioner=true in profile "kindnet-20220412195202-42006"
	I0412 20:00:16.948137  234625 addons.go:153] Setting addon storage-provisioner=true in "kindnet-20220412195202-42006"
	I0412 20:00:16.948148  234625 addons.go:65] Setting default-storageclass=true in profile "kindnet-20220412195202-42006"
	I0412 20:00:16.948171  234625 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kindnet-20220412195202-42006"
	W0412 20:00:16.948152  234625 addons.go:165] addon storage-provisioner should already be in state true
	I0412 20:00:16.948301  234625 host.go:66] Checking if "kindnet-20220412195202-42006" exists ...
	I0412 20:00:16.948605  234625 cli_runner.go:164] Run: docker container inspect kindnet-20220412195202-42006 --format={{.State.Status}}
	I0412 20:00:16.948824  234625 cli_runner.go:164] Run: docker container inspect kindnet-20220412195202-42006 --format={{.State.Status}}
	I0412 20:00:16.994055  234625 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0412 20:00:16.994187  234625 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0412 20:00:16.994201  234625 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0412 20:00:16.994256  234625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220412195202-42006
	I0412 20:00:16.996443  234625 addons.go:153] Setting addon default-storageclass=true in "kindnet-20220412195202-42006"
	W0412 20:00:16.996486  234625 addons.go:165] addon default-storageclass should already be in state true
	I0412 20:00:16.996527  234625 host.go:66] Checking if "kindnet-20220412195202-42006" exists ...
	I0412 20:00:16.997174  234625 cli_runner.go:164] Run: docker container inspect kindnet-20220412195202-42006 --format={{.State.Status}}
	I0412 20:00:17.030079  234625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.67.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0412 20:00:17.031701  234625 node_ready.go:35] waiting up to 5m0s for node "kindnet-20220412195202-42006" to be "Ready" ...
	I0412 20:00:17.035075  234625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49382 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/kindnet-20220412195202-42006/id_rsa Username:docker}
	I0412 20:00:17.041458  234625 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0412 20:00:17.041486  234625 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0412 20:00:17.041543  234625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220412195202-42006
	I0412 20:00:17.080438  234625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49382 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/kindnet-20220412195202-42006/id_rsa Username:docker}
	I0412 20:00:17.193530  234625 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0412 20:00:17.195131  234625 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0412 20:00:17.294685  234625 start.go:777] {"host.minikube.internal": 192.168.67.1} host record injected into CoreDNS
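The sed pipeline at 20:00:17.030 rewrites the coredns ConfigMap so the Corefile gains a hosts stanza ahead of the forward plugin, and the "host record injected" line confirms it took effect. Reconstructed from the sed expression above, the resulting Corefile fragment looks like this (the forward line's trailing arguments are whatever the stock Corefile already carried):

	        hosts {
	           192.168.67.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf ...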
	I0412 20:00:14.717593  200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
	I0412 20:00:16.717777  200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
	I0412 20:00:15.391553  215682 pod_ready.go:102] pod "calico-kube-controllers-8594699699-bnsh8" in "kube-system" namespace has status "Ready":"False"
	I0412 20:00:17.393407  215682 pod_ready.go:102] pod "calico-kube-controllers-8594699699-bnsh8" in "kube-system" namespace has status "Ready":"False"
	I0412 20:00:17.612049  234625 out.go:176] * Enabled addons: storage-provisioner, default-storageclass
	I0412 20:00:17.612127  234625 addons.go:417] enableAddons completed in 666.177991ms
	I0412 20:00:19.038275  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:00:18.717902  200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
	I0412 20:00:21.217896  200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
	I0412 20:00:19.892385  215682 pod_ready.go:102] pod "calico-kube-controllers-8594699699-bnsh8" in "kube-system" namespace has status "Ready":"False"
	I0412 20:00:22.391892  215682 pod_ready.go:102] pod "calico-kube-controllers-8594699699-bnsh8" in "kube-system" namespace has status "Ready":"False"
	I0412 20:00:21.038649  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:00:23.538578  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:00:23.717571  200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
	I0412 20:00:26.217565  200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
	I0412 20:00:24.392661  215682 pod_ready.go:102] pod "calico-kube-controllers-8594699699-bnsh8" in "kube-system" namespace has status "Ready":"False"
	I0412 20:00:26.891680  215682 pod_ready.go:102] pod "calico-kube-controllers-8594699699-bnsh8" in "kube-system" namespace has status "Ready":"False"
	I0412 20:00:28.892137  215682 pod_ready.go:102] pod "calico-kube-controllers-8594699699-bnsh8" in "kube-system" namespace has status "Ready":"False"
	I0412 20:00:26.038437  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:00:28.538627  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:00:28.717803  200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
	I0412 20:00:31.217481  200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
	I0412 20:00:31.391697  215682 pod_ready.go:102] pod "calico-kube-controllers-8594699699-bnsh8" in "kube-system" namespace has status "Ready":"False"
	I0412 20:00:33.891480  215682 pod_ready.go:102] pod "calico-kube-controllers-8594699699-bnsh8" in "kube-system" namespace has status "Ready":"False"
	I0412 20:00:31.038447  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:00:33.538307  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:00:33.717464  200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
	I0412 20:00:36.217669  200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
	I0412 20:00:35.893182  215682 pod_ready.go:102] pod "calico-kube-controllers-8594699699-bnsh8" in "kube-system" namespace has status "Ready":"False"
	I0412 20:00:38.392498  215682 pod_ready.go:102] pod "calico-kube-controllers-8594699699-bnsh8" in "kube-system" namespace has status "Ready":"False"
	I0412 20:00:35.538917  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:00:38.038927  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:00:38.717159  200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
	I0412 20:00:40.717855  200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
	I0412 20:00:40.891596  215682 pod_ready.go:102] pod "calico-kube-controllers-8594699699-bnsh8" in "kube-system" namespace has status "Ready":"False"
	I0412 20:00:42.892711  215682 pod_ready.go:102] pod "calico-kube-controllers-8594699699-bnsh8" in "kube-system" namespace has status "Ready":"False"
	I0412 20:00:40.538521  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:00:42.540527  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:00:43.217256  200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
	I0412 20:00:45.217852  200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
	I0412 20:00:47.717122  200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
	I0412 20:00:45.391842  215682 pod_ready.go:102] pod "calico-kube-controllers-8594699699-bnsh8" in "kube-system" namespace has status "Ready":"False"
	I0412 20:00:47.891765  215682 pod_ready.go:102] pod "calico-kube-controllers-8594699699-bnsh8" in "kube-system" namespace has status "Ready":"False"
	I0412 20:00:45.038334  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:00:47.038391  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:00:49.717797  200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
	I0412 20:00:52.217675  200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
	I0412 20:00:49.892024  215682 pod_ready.go:102] pod "calico-kube-controllers-8594699699-bnsh8" in "kube-system" namespace has status "Ready":"False"
	I0412 20:00:51.892307  215682 pod_ready.go:102] pod "calico-kube-controllers-8594699699-bnsh8" in "kube-system" namespace has status "Ready":"False"
	I0412 20:00:49.538324  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:00:51.538974  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:00:54.038323  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:00:54.717564  200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
	I0412 20:00:57.217842  200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
	I0412 20:00:54.391535  215682 pod_ready.go:102] pod "calico-kube-controllers-8594699699-bnsh8" in "kube-system" namespace has status "Ready":"False"
	I0412 20:00:56.392469  215682 pod_ready.go:102] pod "calico-kube-controllers-8594699699-bnsh8" in "kube-system" namespace has status "Ready":"False"
	I0412 20:00:58.892155  215682 pod_ready.go:102] pod "calico-kube-controllers-8594699699-bnsh8" in "kube-system" namespace has status "Ready":"False"
	I0412 20:00:56.038611  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:00:58.539241  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:00:59.717546  200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
	I0412 20:01:01.718124  200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
	I0412 20:01:01.391754  215682 pod_ready.go:102] pod "calico-kube-controllers-8594699699-bnsh8" in "kube-system" namespace has status "Ready":"False"
	I0412 20:01:02.897309  215682 pod_ready.go:81] duration metric: took 4m0.072489065s waiting for pod "calico-kube-controllers-8594699699-bnsh8" in "kube-system" namespace to be "Ready" ...
	E0412 20:01:02.897340  215682 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0412 20:01:02.897351  215682 pod_ready.go:78] waiting up to 5m0s for pod "calico-node-rp9nw" in "kube-system" namespace to be "Ready" ...
	I0412 20:01:01.038645  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:01:03.038739  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:01:04.217482  200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
	I0412 20:01:06.717647  200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
	I0412 20:01:04.910641  215682 pod_ready.go:102] pod "calico-node-rp9nw" in "kube-system" namespace has status "Ready":"False"
	I0412 20:01:07.409695  215682 pod_ready.go:102] pod "calico-node-rp9nw" in "kube-system" namespace has status "Ready":"False"
	I0412 20:01:05.039226  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:01:07.538297  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:01:08.717806  200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
	I0412 20:01:11.217926  200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
	I0412 20:01:09.411216  215682 pod_ready.go:102] pod "calico-node-rp9nw" in "kube-system" namespace has status "Ready":"False"
	I0412 20:01:11.910496  215682 pod_ready.go:102] pod "calico-node-rp9nw" in "kube-system" namespace has status "Ready":"False"
	I0412 20:01:09.538495  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:01:11.538805  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:01:14.038511  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:01:13.716895  200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
	I0412 20:01:15.717087  200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
	I0412 20:01:17.717899  200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
	I0412 20:01:14.409641  215682 pod_ready.go:102] pod "calico-node-rp9nw" in "kube-system" namespace has status "Ready":"False"
	I0412 20:01:16.409674  215682 pod_ready.go:102] pod "calico-node-rp9nw" in "kube-system" namespace has status "Ready":"False"
	I0412 20:01:18.409945  215682 pod_ready.go:102] pod "calico-node-rp9nw" in "kube-system" namespace has status "Ready":"False"
	I0412 20:01:16.038744  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:01:18.539026  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:01:20.217613  200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
	I0412 20:01:22.217978  200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
	I0412 20:01:20.409978  215682 pod_ready.go:102] pod "calico-node-rp9nw" in "kube-system" namespace has status "Ready":"False"
	I0412 20:01:22.410212  215682 pod_ready.go:102] pod "calico-node-rp9nw" in "kube-system" namespace has status "Ready":"False"
	I0412 20:01:21.039163  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:01:23.538809  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:01:24.717538  200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
	I0412 20:01:26.718036  200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
	I0412 20:01:24.910097  215682 pod_ready.go:102] pod "calico-node-rp9nw" in "kube-system" namespace has status "Ready":"False"
	I0412 20:01:26.911338  215682 pod_ready.go:102] pod "calico-node-rp9nw" in "kube-system" namespace has status "Ready":"False"
	I0412 20:01:25.538960  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:01:27.539080  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:01:29.217786  200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
	I0412 20:01:31.717219  200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
	I0412 20:01:29.409357  215682 pod_ready.go:102] pod "calico-node-rp9nw" in "kube-system" namespace has status "Ready":"False"
	I0412 20:01:31.410241  215682 pod_ready.go:102] pod "calico-node-rp9nw" in "kube-system" namespace has status "Ready":"False"
	I0412 20:01:33.909935  215682 pod_ready.go:102] pod "calico-node-rp9nw" in "kube-system" namespace has status "Ready":"False"
	I0412 20:01:30.038178  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:01:32.038980  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:01:34.217387  200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
	I0412 20:01:36.717576  200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
	I0412 20:01:36.410475  215682 pod_ready.go:102] pod "calico-node-rp9nw" in "kube-system" namespace has status "Ready":"False"
	I0412 20:01:38.910091  215682 pod_ready.go:102] pod "calico-node-rp9nw" in "kube-system" namespace has status "Ready":"False"
	I0412 20:01:34.538822  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:01:37.038790  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:01:39.217155  200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
	I0412 20:01:41.717921  200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
	I0412 20:01:41.409568  215682 pod_ready.go:102] pod "calico-node-rp9nw" in "kube-system" namespace has status "Ready":"False"
	I0412 20:01:43.410362  215682 pod_ready.go:102] pod "calico-node-rp9nw" in "kube-system" namespace has status "Ready":"False"
	I0412 20:01:39.538195  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:01:41.538778  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:01:44.038722  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:01:44.217153  200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
	I0412 20:01:46.217484  200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
	I0412 20:01:45.410662  215682 pod_ready.go:102] pod "calico-node-rp9nw" in "kube-system" namespace has status "Ready":"False"
	I0412 20:01:47.909438  215682 pod_ready.go:102] pod "calico-node-rp9nw" in "kube-system" namespace has status "Ready":"False"
	I0412 20:01:46.539146  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:01:49.038295  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:01:48.217798  200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
	I0412 20:01:50.217902  200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
	I0412 20:01:52.718052  200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
	I0412 20:01:49.910205  215682 pod_ready.go:102] pod "calico-node-rp9nw" in "kube-system" namespace has status "Ready":"False"
	I0412 20:01:52.409746  215682 pod_ready.go:102] pod "calico-node-rp9nw" in "kube-system" namespace has status "Ready":"False"
	I0412 20:01:51.038758  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:01:53.039071  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:01:55.217066  200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
	I0412 20:01:57.217692  200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
	I0412 20:01:54.410475  215682 pod_ready.go:102] pod "calico-node-rp9nw" in "kube-system" namespace has status "Ready":"False"
	I0412 20:01:56.910213  215682 pod_ready.go:102] pod "calico-node-rp9nw" in "kube-system" namespace has status "Ready":"False"
	I0412 20:01:58.910650  215682 pod_ready.go:102] pod "calico-node-rp9nw" in "kube-system" namespace has status "Ready":"False"
	I0412 20:01:55.539116  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:01:58.038934  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:01:59.717349  200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
	I0412 20:02:01.718045  200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
	I0412 20:02:01.409592  215682 pod_ready.go:102] pod "calico-node-rp9nw" in "kube-system" namespace has status "Ready":"False"
	I0412 20:02:03.410035  215682 pod_ready.go:102] pod "calico-node-rp9nw" in "kube-system" namespace has status "Ready":"False"
	I0412 20:02:00.039044  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:02:02.539085  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:02:04.217477  200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
	I0412 20:02:06.217876  200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
	I0412 20:02:05.910262  215682 pod_ready.go:102] pod "calico-node-rp9nw" in "kube-system" namespace has status "Ready":"False"
	I0412 20:02:08.409777  215682 pod_ready.go:102] pod "calico-node-rp9nw" in "kube-system" namespace has status "Ready":"False"
	I0412 20:02:05.039182  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:02:07.538476  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:02:08.717347  200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
	I0412 20:02:10.717910  200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
	I0412 20:02:10.410013  215682 pod_ready.go:102] pod "calico-node-rp9nw" in "kube-system" namespace has status "Ready":"False"
	I0412 20:02:12.410056  215682 pod_ready.go:102] pod "calico-node-rp9nw" in "kube-system" namespace has status "Ready":"False"
	I0412 20:02:09.538679  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:02:12.038785  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:02:14.038818  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:02:13.218046  200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
	I0412 20:02:15.717348  200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
	I0412 20:02:14.910778  215682 pod_ready.go:102] pod "calico-node-rp9nw" in "kube-system" namespace has status "Ready":"False"
	I0412 20:02:16.911702  215682 pod_ready.go:102] pod "calico-node-rp9nw" in "kube-system" namespace has status "Ready":"False"
	I0412 20:02:16.538735  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:02:19.038825  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:02:18.217618  200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
	I0412 20:02:20.717594  200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
	I0412 20:02:22.717754  200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
	I0412 20:02:19.409449  215682 pod_ready.go:102] pod "calico-node-rp9nw" in "kube-system" namespace has status "Ready":"False"
	I0412 20:02:21.410665  215682 pod_ready.go:102] pod "calico-node-rp9nw" in "kube-system" namespace has status "Ready":"False"
	I0412 20:02:23.910445  215682 pod_ready.go:102] pod "calico-node-rp9nw" in "kube-system" namespace has status "Ready":"False"
	I0412 20:02:21.039094  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:02:23.538266  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:02:25.217365  200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
	I0412 20:02:27.717700  200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
	I0412 20:02:25.910686  215682 pod_ready.go:102] pod "calico-node-rp9nw" in "kube-system" namespace has status "Ready":"False"
	I0412 20:02:28.409680  215682 pod_ready.go:102] pod "calico-node-rp9nw" in "kube-system" namespace has status "Ready":"False"
	I0412 20:02:25.539402  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:02:28.039025  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:02:30.217534  200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
	I0412 20:02:32.717420  200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
	I0412 20:02:30.909521  215682 pod_ready.go:102] pod "calico-node-rp9nw" in "kube-system" namespace has status "Ready":"False"
	I0412 20:02:32.910092  215682 pod_ready.go:102] pod "calico-node-rp9nw" in "kube-system" namespace has status "Ready":"False"
	I0412 20:02:30.538544  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:02:33.038931  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:02:34.717695  200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
	I0412 20:02:36.717943  200789 node_ready.go:58] node "pause-20220412195428-42006" has status "Ready":"False"
	I0412 20:02:37.220203  200789 node_ready.go:38] duration metric: took 4m0.0098666s waiting for node "pause-20220412195428-42006" to be "Ready" ...
	I0412 20:02:37.222618  200789 out.go:176] 
	W0412 20:02:37.222763  200789 out.go:241] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0412 20:02:37.222775  200789 out.go:241] * 
	W0412 20:02:37.223467  200789 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	
	* 
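The GUEST_START failure above is the pause test's node-readiness wait expiring: every node_ready.go:58 line before it is one poll of the node's Ready condition, and after 4m0s the condition never flipped. A minimal client-go sketch of one such check (kubeconfig path and node name taken from the log; this is not minikube's own node_ready implementation):

	// node_ready_sketch.go: report whether a node's Ready condition is True.
	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		node, err := cs.CoreV1().Nodes().Get(context.Background(),
			"pause-20220412195428-42006", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				fmt.Printf("Ready=%s (%s)\n", c.Status, c.Reason)
			}
		}
	}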
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	b08f5ef3bae50       6de166512aa22       About a minute ago   Running             kindnet-cni               1                   a1e50dee04f41
	bde184ab19256       6de166512aa22       4 minutes ago        Exited              kindnet-cni               0                   a1e50dee04f41
	770f07872e71d       3c53fa8541f95       4 minutes ago        Running             kube-proxy                0                   e6d238531ecc9
	a5102a9c6c188       884d49d6d8c9f       4 minutes ago        Running             kube-scheduler            0                   9bc604b175965
	12297e4242865       3fc1d62d65872       4 minutes ago        Running             kube-apiserver            0                   86f3034b1ab0c
	ec3584dd3bc99       b0c9e5e4dbb14       4 minutes ago        Running             kube-controller-manager   0                   148ffee7343df
	ab8e5cd14558c       25f8c7f3da61c       4 minutes ago        Running             etcd                      0                   c5b276b6c7036
	
	* 
	* ==> containerd <==
	* -- Logs begin at Tue 2022-04-12 19:58:00 UTC, end at Tue 2022-04-12 20:02:38 UTC. --
	Apr 12 19:58:17 pause-20220412195428-42006 containerd[516]: time="2022-04-12T19:58:17.986386893Z" level=info msg="StartContainer for \"ec3584dd3bc996c2e709a0d7c44be4c02546fcd782f152d395deb6d890efa53b\" returns successfully"
	Apr 12 19:58:17 pause-20220412195428-42006 containerd[516]: time="2022-04-12T19:58:17.986427490Z" level=info msg="StartContainer for \"12297e42428653f65289acbe7149d83b7948bcaef5f91622ac1b42b6cff89754\" returns successfully"
	Apr 12 19:58:35 pause-20220412195428-42006 containerd[516]: time="2022-04-12T19:58:35.789614369Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
	Apr 12 19:58:36 pause-20220412195428-42006 containerd[516]: time="2022-04-12T19:58:36.943408223Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kube-proxy-mkwvw,Uid:4d1d82b2-0635-445b-8f4b-862f04d00f43,Namespace:kube-system,Attempt:0,}"
	Apr 12 19:58:36 pause-20220412195428-42006 containerd[516]: time="2022-04-12T19:58:36.943933976Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kindnet-bc6md,Uid:64b79d06-dc7d-4efd-b7d7-89cdc366440f,Namespace:kube-system,Attempt:0,}"
	Apr 12 19:58:36 pause-20220412195428-42006 containerd[516]: time="2022-04-12T19:58:36.965269922Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a1e50dee04f41d10c759c550ce442081dd17264e49ef63e5630139841bc468f3 pid=1973
	Apr 12 19:58:36 pause-20220412195428-42006 containerd[516]: time="2022-04-12T19:58:36.966395681Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e6d238531ecc918bd10620565de3f3202d87ded47d6d6b535a10f79ed7588281 pid=1982
	Apr 12 19:58:37 pause-20220412195428-42006 containerd[516]: time="2022-04-12T19:58:37.038140310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mkwvw,Uid:4d1d82b2-0635-445b-8f4b-862f04d00f43,Namespace:kube-system,Attempt:0,} returns sandbox id \"e6d238531ecc918bd10620565de3f3202d87ded47d6d6b535a10f79ed7588281\""
	Apr 12 19:58:37 pause-20220412195428-42006 containerd[516]: time="2022-04-12T19:58:37.041003708Z" level=info msg="CreateContainer within sandbox \"e6d238531ecc918bd10620565de3f3202d87ded47d6d6b535a10f79ed7588281\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
	Apr 12 19:58:37 pause-20220412195428-42006 containerd[516]: time="2022-04-12T19:58:37.057550796Z" level=info msg="CreateContainer within sandbox \"e6d238531ecc918bd10620565de3f3202d87ded47d6d6b535a10f79ed7588281\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"770f07872e71d6d38b13569ebee277110b4fab80e2db256bf4bef5989eb88ef7\""
	Apr 12 19:58:37 pause-20220412195428-42006 containerd[516]: time="2022-04-12T19:58:37.058233627Z" level=info msg="StartContainer for \"770f07872e71d6d38b13569ebee277110b4fab80e2db256bf4bef5989eb88ef7\""
	Apr 12 19:58:37 pause-20220412195428-42006 containerd[516]: time="2022-04-12T19:58:37.185038647Z" level=info msg="StartContainer for \"770f07872e71d6d38b13569ebee277110b4fab80e2db256bf4bef5989eb88ef7\" returns successfully"
	Apr 12 19:58:37 pause-20220412195428-42006 containerd[516]: time="2022-04-12T19:58:37.284785276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kindnet-bc6md,Uid:64b79d06-dc7d-4efd-b7d7-89cdc366440f,Namespace:kube-system,Attempt:0,} returns sandbox id \"a1e50dee04f41d10c759c550ce442081dd17264e49ef63e5630139841bc468f3\""
	Apr 12 19:58:37 pause-20220412195428-42006 containerd[516]: time="2022-04-12T19:58:37.288309053Z" level=info msg="CreateContainer within sandbox \"a1e50dee04f41d10c759c550ce442081dd17264e49ef63e5630139841bc468f3\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:0,}"
	Apr 12 19:58:37 pause-20220412195428-42006 containerd[516]: time="2022-04-12T19:58:37.304897861Z" level=info msg="CreateContainer within sandbox \"a1e50dee04f41d10c759c550ce442081dd17264e49ef63e5630139841bc468f3\" for &ContainerMetadata{Name:kindnet-cni,Attempt:0,} returns container id \"bde184ab192563d64e5990ae00d60ba4f2da3d2d6f3a2a313bd2c9bfc04623ff\""
	Apr 12 19:58:37 pause-20220412195428-42006 containerd[516]: time="2022-04-12T19:58:37.305523966Z" level=info msg="StartContainer for \"bde184ab192563d64e5990ae00d60ba4f2da3d2d6f3a2a313bd2c9bfc04623ff\""
	Apr 12 19:58:37 pause-20220412195428-42006 containerd[516]: time="2022-04-12T19:58:37.599360566Z" level=info msg="StartContainer for \"bde184ab192563d64e5990ae00d60ba4f2da3d2d6f3a2a313bd2c9bfc04623ff\" returns successfully"
	Apr 12 20:01:17 pause-20220412195428-42006 containerd[516]: time="2022-04-12T20:01:17.828601256Z" level=info msg="shim disconnected" id=bde184ab192563d64e5990ae00d60ba4f2da3d2d6f3a2a313bd2c9bfc04623ff
	Apr 12 20:01:17 pause-20220412195428-42006 containerd[516]: time="2022-04-12T20:01:17.828671599Z" level=warning msg="cleaning up after shim disconnected" id=bde184ab192563d64e5990ae00d60ba4f2da3d2d6f3a2a313bd2c9bfc04623ff namespace=k8s.io
	Apr 12 20:01:17 pause-20220412195428-42006 containerd[516]: time="2022-04-12T20:01:17.828683294Z" level=info msg="cleaning up dead shim"
	Apr 12 20:01:17 pause-20220412195428-42006 containerd[516]: time="2022-04-12T20:01:17.839643364Z" level=warning msg="cleanup warnings time=\"2022-04-12T20:01:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2317\n"
	Apr 12 20:01:18 pause-20220412195428-42006 containerd[516]: time="2022-04-12T20:01:18.706913410Z" level=info msg="CreateContainer within sandbox \"a1e50dee04f41d10c759c550ce442081dd17264e49ef63e5630139841bc468f3\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:1,}"
	Apr 12 20:01:18 pause-20220412195428-42006 containerd[516]: time="2022-04-12T20:01:18.723493956Z" level=info msg="CreateContainer within sandbox \"a1e50dee04f41d10c759c550ce442081dd17264e49ef63e5630139841bc468f3\" for &ContainerMetadata{Name:kindnet-cni,Attempt:1,} returns container id \"b08f5ef3bae5064ce05e3300f832dc204db6541b93779b8153ce918133be9ee5\""
	Apr 12 20:01:18 pause-20220412195428-42006 containerd[516]: time="2022-04-12T20:01:18.724570525Z" level=info msg="StartContainer for \"b08f5ef3bae5064ce05e3300f832dc204db6541b93779b8153ce918133be9ee5\""
	Apr 12 20:01:18 pause-20220412195428-42006 containerd[516]: time="2022-04-12T20:01:18.884198752Z" level=info msg="StartContainer for \"b08f5ef3bae5064ce05e3300f832dc204db6541b93779b8153ce918133be9ee5\" returns successfully"
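The shim-disconnect sequence at 20:01:17 records the first kindnet-cni container (bde184ab...) exiting and being recreated as Attempt:1 (b08f5ef3...), matching the Exited/Running pair in the container status table above. A sketch of the manual follow-up check (crictl's -a flag includes exited containers; --name filters by container name):

	// restart_check.go: list every attempt of the kindnet-cni container,
	// the manual follow-up to a shim-disconnect sequence like the one above.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("sudo", "crictl", "ps", "-a",
			"--name", "kindnet-cni").CombinedOutput()
		if err != nil {
			panic(err)
		}
		fmt.Print(string(out)) // shows both Attempt 0 (Exited) and Attempt 1 (Running)
	}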
	
	* 
	* ==> describe nodes <==
	* Name:               pause-20220412195428-42006
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-20220412195428-42006
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=dcd548d63d1c0dcbdc0ffc0bd37d4379117c142f
	                    minikube.k8s.io/name=pause-20220412195428-42006
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_04_12T19_58_25_0700
	                    minikube.k8s.io/version=v1.25.2
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 12 Apr 2022 19:58:20 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-20220412195428-42006
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 12 Apr 2022 20:02:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 12 Apr 2022 19:58:34 +0000   Tue, 12 Apr 2022 19:58:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 12 Apr 2022 19:58:34 +0000   Tue, 12 Apr 2022 19:58:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 12 Apr 2022 19:58:34 +0000   Tue, 12 Apr 2022 19:58:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Tue, 12 Apr 2022 19:58:34 +0000   Tue, 12 Apr 2022 19:58:18 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-20220412195428-42006
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873828Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873828Ki
	  pods:               110
	System Info:
	  Machine ID:                 140a143b31184b58be947b52a01fff83
	  System UUID:                f7bfddc0-fa9a-494c-85f2-66b8e6c42fb6
	  Boot ID:                    16b2caa1-c1b9-4ccc-85b8-d4dc3f51a5e1
	  Kernel Version:             5.13.0-1023-gcp
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.5.10
	  Kubelet Version:            v1.23.5
	  Kube-Proxy Version:         v1.23.5
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                                  CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                  ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-pause-20220412195428-42006                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m14s
	  kube-system                 kindnet-bc6md                                         100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m2s
	  kube-system                 kube-apiserver-pause-20220412195428-42006             250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m14s
	  kube-system                 kube-controller-manager-pause-20220412195428-42006    200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m15s
	  kube-system                 kube-proxy-mkwvw                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 kube-scheduler-pause-20220412195428-42006             100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m14s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)    100m (1%)
	  memory             150Mi (0%)   50Mi (0%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From        Message
	  ----    ------                   ----   ----        -------
	  Normal  Starting                 4m1s   kube-proxy  
	  Normal  Starting                 4m14s  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m14s  kubelet     Node pause-20220412195428-42006 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m14s  kubelet     Node pause-20220412195428-42006 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m14s  kubelet     Node pause-20220412195428-42006 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m14s  kubelet     Updated Node Allocatable limit across pods
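Taken together, the describe output explains the timeout: the node is stuck Ready=False with reason "cni plugin not initialized" and still carries the node.kubernetes.io/not-ready:NoSchedule taint, and kubelet will not report Ready until a CNI config appears in the directory it was pointed at (cni-conf-dir=/etc/cni/net.mk per the kubelet extra option in the cluster config above). A sketch of the corresponding on-node check:

	// cni_check.go: list CNI config files in the directory kubelet watches
	// (/etc/cni/net.mk here). An empty listing is consistent with the
	// "cni plugin not initialized" condition above. Sketch only.
	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		entries, err := os.ReadDir("/etc/cni/net.mk")
		if err != nil {
			fmt.Println("cannot read CNI conf dir:", err)
			return
		}
		if len(entries) == 0 {
			fmt.Println("no CNI config present; kubelet will stay NotReady")
			return
		}
		for _, e := range entries {
			fmt.Println("found CNI config:", e.Name())
		}
	}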
	
	* 
	* ==> dmesg <==
	* [  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6e ec d3 66 df 4a 08 06
	[  +2.947870] IPv4: martian source 10.244.0.43 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6e ec d3 66 df 4a 08 06
	[  +1.019798] IPv4: martian source 10.244.0.43 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6e ec d3 66 df 4a 08 06
	[  +1.023930] IPv4: martian source 10.244.0.43 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6e ec d3 66 df 4a 08 06
	[ +17.927324] IPv4: martian source 10.244.0.43 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6e ec d3 66 df 4a 08 06
	[  +1.019424] IPv4: martian source 10.244.0.43 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6e ec d3 66 df 4a 08 06
	[  +1.019947] IPv4: martian source 10.244.0.43 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 6e ec d3 66 df 4a 08 06
	[Apr12 20:02] IPv4: martian source 10.244.0.43 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6e ec d3 66 df 4a 08 06
	[  +1.007834] IPv4: martian source 10.244.0.43 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6e ec d3 66 df 4a 08 06
	[  +1.023920] IPv4: martian source 10.244.0.43 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6e ec d3 66 df 4a 08 06
	[  +2.967928] IPv4: martian source 10.244.0.43 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6e ec d3 66 df 4a 08 06
	[  +1.031787] IPv4: martian source 10.244.0.43 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6e ec d3 66 df 4a 08 06
	[  +1.027962] IPv4: martian source 10.244.0.43 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6e ec d3 66 df 4a 08 06
	
	* 
	* ==> etcd [ab8e5cd14558cb29546efe15f9215efe57017c2193e7f6646140863c3dee6124] <==
	* {"level":"info","ts":"2022-04-12T19:58:18.090Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2022-04-12T19:58:18.093Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-04-12T19:58:18.093Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-04-12T19:58:18.093Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2022-04-12T19:58:18.093Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-04-12T19:58:18.093Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-04-12T19:58:18.318Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2022-04-12T19:58:18.318Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2022-04-12T19:58:18.318Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2022-04-12T19:58:18.318Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2022-04-12T19:58:18.318Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2022-04-12T19:58:18.318Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2022-04-12T19:58:18.318Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2022-04-12T19:58:18.318Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-04-12T19:58:18.319Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2022-04-12T19:58:18.319Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-04-12T19:58:18.320Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-04-12T19:58:18.320Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-04-12T19:58:18.320Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-04-12T19:58:18.320Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-04-12T19:58:18.320Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-04-12T19:58:18.321Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-04-12T19:58:18.321Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2022-04-12T19:58:18.322Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:pause-20220412195428-42006 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2022-04-12T19:59:32.440Z","caller":"traceutil/trace.go:171","msg":"trace[803619546] transaction","detail":"{read_only:false; response_revision:472; number_of_response:1; }","duration":"109.877762ms","start":"2022-04-12T19:59:32.330Z","end":"2022-04-12T19:59:32.440Z","steps":["trace[803619546] 'process raft request'  (duration: 61.958105ms)","trace[803619546] 'compare'  (duration: 47.821616ms)"],"step_count":2}
	
	* 
	* ==> kernel <==
	*  20:02:38 up  2:45,  0 users,  load average: 0.28, 1.70, 2.02
	Linux pause-20220412195428-42006 5.13.0-1023-gcp #28~20.04.1-Ubuntu SMP Wed Mar 30 03:51:07 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [12297e42428653f65289acbe7149d83b7948bcaef5f91622ac1b42b6cff89754] <==
	* I0412 19:58:20.883314       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0412 19:58:20.883331       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I0412 19:58:20.885171       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0412 19:58:20.885186       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
	I0412 19:58:20.900979       1 controller.go:611] quota admission added evaluator for: namespaces
	I0412 19:58:20.903622       1 shared_informer.go:247] Caches are synced for node_authorizer 
	I0412 19:58:21.737857       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0412 19:58:21.737886       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0412 19:58:21.742601       1 storage_scheduling.go:93] created PriorityClass system-node-critical with value 2000001000
	I0412 19:58:21.745735       1 storage_scheduling.go:93] created PriorityClass system-cluster-critical with value 2000000000
	I0412 19:58:21.745757       1 storage_scheduling.go:109] all system priority classes are created successfully or already exist.
	I0412 19:58:22.163015       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0412 19:58:22.205407       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0412 19:58:22.332552       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0412 19:58:22.340173       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I0412 19:58:22.341375       1 controller.go:611] quota admission added evaluator for: endpoints
	I0412 19:58:22.345504       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0412 19:58:22.910532       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0412 19:58:24.017738       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0412 19:58:24.027682       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0412 19:58:24.037816       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0412 19:58:24.286568       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0412 19:58:36.614124       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0412 19:58:36.665133       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0412 19:58:37.317616       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	* 
	* ==> kube-controller-manager [ec3584dd3bc996c2e709a0d7c44be4c02546fcd782f152d395deb6d890efa53b] <==
	* I0412 19:58:35.962032       1 shared_informer.go:247] Caches are synced for daemon sets 
	I0412 19:58:35.962192       1 shared_informer.go:247] Caches are synced for endpoint 
	I0412 19:58:35.964587       1 shared_informer.go:247] Caches are synced for job 
	I0412 19:58:35.966551       1 shared_informer.go:247] Caches are synced for resource quota 
	I0412 19:58:35.966551       1 shared_informer.go:247] Caches are synced for ephemeral 
	I0412 19:58:35.967517       1 event.go:294] "Event occurred" object="kube-system/etcd-pause-20220412195428-42006" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0412 19:58:35.969672       1 event.go:294] "Event occurred" object="kube-system/kube-apiserver-pause-20220412195428-42006" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0412 19:58:35.970876       1 shared_informer.go:247] Caches are synced for resource quota 
	I0412 19:58:35.971816       1 shared_informer.go:247] Caches are synced for GC 
	I0412 19:58:35.986622       1 shared_informer.go:247] Caches are synced for HPA 
	I0412 19:58:35.995901       1 shared_informer.go:247] Caches are synced for stateful set 
	I0412 19:58:36.012836       1 shared_informer.go:247] Caches are synced for deployment 
	I0412 19:58:36.012924       1 shared_informer.go:247] Caches are synced for attach detach 
	I0412 19:58:36.015198       1 shared_informer.go:247] Caches are synced for disruption 
	I0412 19:58:36.015220       1 disruption.go:371] Sending events to api server.
	I0412 19:58:36.388338       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0412 19:58:36.411463       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0412 19:58:36.411493       1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0412 19:58:36.620006       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-mkwvw"
	I0412 19:58:36.621916       1 event.go:294] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-bc6md"
	I0412 19:58:36.667214       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-64897985d to 2"
	I0412 19:58:36.686526       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-64897985d to 1"
	I0412 19:58:36.767012       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-9vvk4"
	I0412 19:58:36.771633       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-gc8l7"
	I0412 19:58:36.793349       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-64897985d-9vvk4"
	
	* 
	* ==> kube-proxy [770f07872e71d6d38b13569ebee277110b4fab80e2db256bf4bef5989eb88ef7] <==
	* I0412 19:58:37.228735       1 node.go:163] Successfully retrieved node IP: 192.168.76.2
	I0412 19:58:37.228819       1 server_others.go:138] "Detected node IP" address="192.168.76.2"
	I0412 19:58:37.228864       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0412 19:58:37.313916       1 server_others.go:206] "Using iptables Proxier"
	I0412 19:58:37.313955       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0412 19:58:37.313967       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0412 19:58:37.313997       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0412 19:58:37.314465       1 server.go:656] "Version info" version="v1.23.5"
	I0412 19:58:37.315515       1 config.go:226] "Starting endpoint slice config controller"
	I0412 19:58:37.315539       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0412 19:58:37.315562       1 config.go:317] "Starting service config controller"
	I0412 19:58:37.315567       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0412 19:58:37.416490       1 shared_informer.go:247] Caches are synced for service config 
	I0412 19:58:37.416525       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-scheduler [a5102a9c6c18880f279c76b5b41a685ac2be3dca5038c7565237cec6b8c986b9] <==
	* W0412 19:58:20.899632       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0412 19:58:20.899646       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0412 19:58:20.900156       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0412 19:58:20.900258       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0412 19:58:20.900376       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0412 19:58:20.900403       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0412 19:58:20.900468       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0412 19:58:20.900489       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0412 19:58:20.900769       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0412 19:58:20.900795       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0412 19:58:21.713827       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0412 19:58:21.713866       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0412 19:58:21.729292       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0412 19:58:21.729328       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0412 19:58:21.733510       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0412 19:58:21.733557       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0412 19:58:21.765763       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0412 19:58:21.765798       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0412 19:58:21.788461       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0412 19:58:21.788510       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0412 19:58:21.847302       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0412 19:58:21.847341       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0412 19:58:21.910933       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0412 19:58:21.910985       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0412 19:58:22.492327       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2022-04-12 19:58:00 UTC, end at Tue 2022-04-12 20:02:38 UTC. --
	Apr 12 20:00:39 pause-20220412195428-42006 kubelet[1538]: E0412 20:00:39.659836    1538 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:00:44 pause-20220412195428-42006 kubelet[1538]: E0412 20:00:44.660928    1538 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:00:49 pause-20220412195428-42006 kubelet[1538]: E0412 20:00:49.661798    1538 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:00:54 pause-20220412195428-42006 kubelet[1538]: E0412 20:00:54.662747    1538 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:00:59 pause-20220412195428-42006 kubelet[1538]: E0412 20:00:59.664392    1538 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:01:04 pause-20220412195428-42006 kubelet[1538]: E0412 20:01:04.665445    1538 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:01:09 pause-20220412195428-42006 kubelet[1538]: E0412 20:01:09.666793    1538 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:01:14 pause-20220412195428-42006 kubelet[1538]: E0412 20:01:14.668163    1538 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:01:18 pause-20220412195428-42006 kubelet[1538]: I0412 20:01:18.704743    1538 scope.go:110] "RemoveContainer" containerID="bde184ab192563d64e5990ae00d60ba4f2da3d2d6f3a2a313bd2c9bfc04623ff"
	Apr 12 20:01:19 pause-20220412195428-42006 kubelet[1538]: E0412 20:01:19.669922    1538 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:01:24 pause-20220412195428-42006 kubelet[1538]: E0412 20:01:24.670800    1538 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:01:29 pause-20220412195428-42006 kubelet[1538]: E0412 20:01:29.672355    1538 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:01:34 pause-20220412195428-42006 kubelet[1538]: E0412 20:01:34.673354    1538 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:01:39 pause-20220412195428-42006 kubelet[1538]: E0412 20:01:39.675128    1538 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:01:44 pause-20220412195428-42006 kubelet[1538]: E0412 20:01:44.676332    1538 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:01:49 pause-20220412195428-42006 kubelet[1538]: E0412 20:01:49.677013    1538 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:01:54 pause-20220412195428-42006 kubelet[1538]: E0412 20:01:54.678718    1538 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:01:59 pause-20220412195428-42006 kubelet[1538]: E0412 20:01:59.680439    1538 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:02:04 pause-20220412195428-42006 kubelet[1538]: E0412 20:02:04.681729    1538 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:02:09 pause-20220412195428-42006 kubelet[1538]: E0412 20:02:09.682646    1538 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:02:14 pause-20220412195428-42006 kubelet[1538]: E0412 20:02:14.683977    1538 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:02:19 pause-20220412195428-42006 kubelet[1538]: E0412 20:02:19.685022    1538 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:02:24 pause-20220412195428-42006 kubelet[1538]: E0412 20:02:24.686236    1538 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:02:29 pause-20220412195428-42006 kubelet[1538]: E0412 20:02:29.687429    1538 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:02:34 pause-20220412195428-42006 kubelet[1538]: E0412 20:02:34.688474    1538 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	

-- /stdout --
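Editor's note: the kubelet section above repeats the same error every five seconds: the runtime stays NetworkReady=false because the CNI plugin was never initialized. The start output also pinned kubelet.cni-conf-dir=/etc/cni/net.mk, so a plugin that writes its conflist to the default /etc/cni/net.d would never be seen by this kubelet. A quick check from the host, reusing only tools already used in this run (a sketch; it assumes the node container is still up):

  # compare the directory kubelet was told to watch against the default CNI directory
  docker exec pause-20220412195428-42006 ls -la /etc/cni/net.mk /etc/cni/net.d
  # confirm from kubelet's own log whether a config was ever picked up
  docker exec pause-20220412195428-42006 journalctl -u kubelet --no-pager | tail -n 20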
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-20220412195428-42006 -n pause-20220412195428-42006
helpers_test.go:261: (dbg) Run:  kubectl --context pause-20220412195428-42006 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: coredns-64897985d-gc8l7
helpers_test.go:272: ======> post-mortem[TestPause/serial/Start]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context pause-20220412195428-42006 describe pod coredns-64897985d-gc8l7
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context pause-20220412195428-42006 describe pod coredns-64897985d-gc8l7: exit status 1 (61.167307ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-64897985d-gc8l7" not found

** /stderr **
helpers_test.go:277: kubectl --context pause-20220412195428-42006 describe pod coredns-64897985d-gc8l7: exit status 1
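Editor's note: the NotFound on the describe is a race in the post-mortem itself: coredns-64897985d-gc8l7 was present when the non-running pods were listed but gone by the time the describe ran moments later (the controller-manager log above shows the coredns Deployment scaled from 2 down to 1, so surplus replicas do get deleted). A version of the same post-mortem that tolerates pods vanishing mid-flight (a sketch built from the commands already run above):

  # list namespace/name pairs once, then describe each, ignoring pods deleted in between
  kubectl --context pause-20220412195428-42006 get po -A --field-selector=status.phase!=Running \
    -o jsonpath='{range .items[*]}{.metadata.namespace}{" "}{.metadata.name}{"\n"}{end}' |
  while read -r ns name; do
    kubectl --context pause-20220412195428-42006 describe pod "$name" -n "$ns" || true
  done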
--- FAIL: TestPause/serial/Start (491.32s)

TestNetworkPlugins/group/calico/Start (533.9s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p calico-20220412195203-42006 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker  --container-runtime=containerd

=== CONT  TestNetworkPlugins/group/calico/Start
net_test.go:98: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p calico-20220412195203-42006 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker  --container-runtime=containerd: exit status 80 (8m53.872555643s)

-- stdout --
	* [calico-20220412195203-42006] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=13812
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on user configuration
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	* Using Docker driver with the root privilege
	* Starting control plane node calico-20220412195203-42006 in cluster calico-20220412195203-42006
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* Preparing Kubernetes v1.23.5 on containerd 1.5.10 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring Calico (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: default-storageclass, storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0412 19:56:09.130527  215682 out.go:297] Setting OutFile to fd 1 ...
	I0412 19:56:09.130631  215682 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0412 19:56:09.130640  215682 out.go:310] Setting ErrFile to fd 2...
	I0412 19:56:09.130645  215682 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0412 19:56:09.130738  215682 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/bin
	I0412 19:56:09.131019  215682 out.go:304] Setting JSON to false
	I0412 19:56:09.132794  215682 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":9522,"bootTime":1649783847,"procs":1187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1023-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0412 19:56:09.132876  215682 start.go:125] virtualization: kvm guest
	I0412 19:56:09.135590  215682 out.go:176] * [calico-20220412195203-42006] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	I0412 19:56:09.137102  215682 out.go:176]   - MINIKUBE_LOCATION=13812
	I0412 19:56:09.135811  215682 notify.go:193] Checking for updates...
	I0412 19:56:09.138446  215682 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0412 19:56:09.139832  215682 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0412 19:56:09.141181  215682 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube
	I0412 19:56:09.142469  215682 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0412 19:56:09.142988  215682 config.go:178] Loaded profile config "cert-expiration-20220412195203-42006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.5
	I0412 19:56:09.143084  215682 config.go:178] Loaded profile config "cilium-20220412195203-42006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.5
	I0412 19:56:09.143167  215682 config.go:178] Loaded profile config "pause-20220412195428-42006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.5
	I0412 19:56:09.143209  215682 driver.go:346] Setting default libvirt URI to qemu:///system
	I0412 19:56:09.186324  215682 docker.go:137] docker version: linux-20.10.14
	I0412 19:56:09.186450  215682 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0412 19:56:09.282382  215682 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:75 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:44 SystemTime:2022-04-12 19:56:09.216809843 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1023-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662799872 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0412 19:56:09.282488  215682 docker.go:254] overlay module found
	I0412 19:56:09.284983  215682 out.go:176] * Using the docker driver based on user configuration
	I0412 19:56:09.285030  215682 start.go:284] selected driver: docker
	I0412 19:56:09.285038  215682 start.go:801] validating driver "docker" against <nil>
	I0412 19:56:09.285060  215682 start.go:812] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	W0412 19:56:09.285112  215682 oci.go:120] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0412 19:56:09.285135  215682 out.go:241] ! Your cgroup does not allow setting memory.
	! Your cgroup does not allow setting memory.
	I0412 19:56:09.286895  215682 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0412 19:56:09.287592  215682 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0412 19:56:09.382689  215682 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:75 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:44 SystemTime:2022-04-12 19:56:09.319246509 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1023-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662799872 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0412 19:56:09.382822  215682 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0412 19:56:09.382984  215682 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0412 19:56:09.385145  215682 out.go:176] * Using Docker driver with the root privilege
	I0412 19:56:09.385186  215682 cni.go:93] Creating CNI manager for "calico"
	I0412 19:56:09.385205  215682 start_flags.go:301] Found "Calico" CNI - setting NetworkPlugin=cni
	I0412 19:56:09.385225  215682 start_flags.go:306] config:
	{Name:calico-20220412195203-42006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:calico-20220412195203-42006 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0412 19:56:09.387018  215682 out.go:176] * Starting control plane node calico-20220412195203-42006 in cluster calico-20220412195203-42006
	I0412 19:56:09.387069  215682 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0412 19:56:09.388569  215682 out.go:176] * Pulling base image ...
	I0412 19:56:09.388617  215682 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime containerd
	I0412 19:56:09.388660  215682 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-containerd-overlay2-amd64.tar.lz4
	I0412 19:56:09.388680  215682 cache.go:57] Caching tarball of preloaded images
	I0412 19:56:09.388713  215682 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon
	I0412 19:56:09.388939  215682 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0412 19:56:09.388954  215682 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.5 on containerd
	I0412 19:56:09.389084  215682 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/calico-20220412195203-42006/config.json ...
	I0412 19:56:09.389107  215682 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/calico-20220412195203-42006/config.json: {Name:mk3160cc1b959eb41027a997a37c03ef6cd1d061 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 19:56:09.435788  215682 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon, skipping pull
	I0412 19:56:09.435828  215682 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 exists in daemon, skipping load
	I0412 19:56:09.435846  215682 cache.go:206] Successfully downloaded all kic artifacts
	I0412 19:56:09.435894  215682 start.go:352] acquiring machines lock for calico-20220412195203-42006: {Name:mk81b28685927e637ebc8087fa6da1d9f7ae553f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0412 19:56:09.436035  215682 start.go:356] acquired machines lock for "calico-20220412195203-42006" in 119.399µs
	I0412 19:56:09.436064  215682 start.go:91] Provisioning new machine with config: &{Name:calico-20220412195203-42006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:calico-20220412195203-42006 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0412 19:56:09.436200  215682 start.go:131] createHost starting for "" (driver="docker")
	I0412 19:56:09.438583  215682 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0412 19:56:09.438822  215682 start.go:165] libmachine.API.Create for "calico-20220412195203-42006" (driver="docker")
	I0412 19:56:09.438850  215682 client.go:168] LocalClient.Create starting
	I0412 19:56:09.438919  215682 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem
	I0412 19:56:09.438980  215682 main.go:134] libmachine: Decoding PEM data...
	I0412 19:56:09.439005  215682 main.go:134] libmachine: Parsing certificate...
	I0412 19:56:09.439047  215682 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem
	I0412 19:56:09.439066  215682 main.go:134] libmachine: Decoding PEM data...
	I0412 19:56:09.439078  215682 main.go:134] libmachine: Parsing certificate...
	I0412 19:56:09.439411  215682 cli_runner.go:164] Run: docker network inspect calico-20220412195203-42006 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0412 19:56:09.472260  215682 cli_runner.go:211] docker network inspect calico-20220412195203-42006 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0412 19:56:09.472367  215682 network_create.go:272] running [docker network inspect calico-20220412195203-42006] to gather additional debugging logs...
	I0412 19:56:09.472398  215682 cli_runner.go:164] Run: docker network inspect calico-20220412195203-42006
	W0412 19:56:09.507191  215682 cli_runner.go:211] docker network inspect calico-20220412195203-42006 returned with exit code 1
	I0412 19:56:09.507230  215682 network_create.go:275] error running [docker network inspect calico-20220412195203-42006]: docker network inspect calico-20220412195203-42006: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: calico-20220412195203-42006
	I0412 19:56:09.507249  215682 network_create.go:277] output of [docker network inspect calico-20220412195203-42006]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: calico-20220412195203-42006
	
	** /stderr **
	I0412 19:56:09.507314  215682 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0412 19:56:09.542089  215682 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-24c689ae021e IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:9e:74:c1:5b}}
	I0412 19:56:09.542544  215682 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.58.0:0xc0006ce600] misses:0}
	I0412 19:56:09.542583  215682 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0412 19:56:09.542600  215682 network_create.go:115] attempt to create docker network calico-20220412195203-42006 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0412 19:56:09.542664  215682 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20220412195203-42006
	I0412 19:56:09.615537  215682 network_create.go:99] docker network calico-20220412195203-42006 192.168.58.0/24 created
	I0412 19:56:09.615574  215682 kic.go:106] calculated static IP "192.168.58.2" for the "calico-20220412195203-42006" container
	I0412 19:56:09.615628  215682 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0412 19:56:09.651665  215682 cli_runner.go:164] Run: docker volume create calico-20220412195203-42006 --label name.minikube.sigs.k8s.io=calico-20220412195203-42006 --label created_by.minikube.sigs.k8s.io=true
	I0412 19:56:09.685269  215682 oci.go:103] Successfully created a docker volume calico-20220412195203-42006
	I0412 19:56:09.685362  215682 cli_runner.go:164] Run: docker run --rm --name calico-20220412195203-42006-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20220412195203-42006 --entrypoint /usr/bin/test -v calico-20220412195203-42006:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -d /var/lib
	I0412 19:56:10.258142  215682 oci.go:107] Successfully prepared a docker volume calico-20220412195203-42006
	I0412 19:56:10.258220  215682 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime containerd
	I0412 19:56:10.258256  215682 kic.go:179] Starting extracting preloaded images to volume ...
	I0412 19:56:10.258335  215682 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-20220412195203-42006:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -I lz4 -xf /preloaded.tar -C /extractDir
	I0412 19:56:17.935391  215682 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-20220412195203-42006:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -I lz4 -xf /preloaded.tar -C /extractDir: (7.676976742s)
	I0412 19:56:17.935431  215682 kic.go:188] duration metric: took 7.677170 seconds to extract preloaded images to volume
	W0412 19:56:17.935488  215682 oci.go:136] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0412 19:56:17.935504  215682 oci.go:120] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0412 19:56:17.935584  215682 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0412 19:56:18.041721  215682 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-20220412195203-42006 --name calico-20220412195203-42006 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20220412195203-42006 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-20220412195203-42006 --network calico-20220412195203-42006 --ip 192.168.58.2 --volume calico-20220412195203-42006:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5
	I0412 19:56:18.490225  215682 cli_runner.go:164] Run: docker container inspect calico-20220412195203-42006 --format={{.State.Running}}
	I0412 19:56:18.530734  215682 cli_runner.go:164] Run: docker container inspect calico-20220412195203-42006 --format={{.State.Status}}
	I0412 19:56:18.567666  215682 cli_runner.go:164] Run: docker exec calico-20220412195203-42006 stat /var/lib/dpkg/alternatives/iptables
	I0412 19:56:18.643845  215682 oci.go:279] the created container "calico-20220412195203-42006" has a running status.
	I0412 19:56:18.643885  215682 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/calico-20220412195203-42006/id_rsa...
	I0412 19:56:18.838152  215682 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/calico-20220412195203-42006/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0412 19:56:18.938790  215682 cli_runner.go:164] Run: docker container inspect calico-20220412195203-42006 --format={{.State.Status}}
	I0412 19:56:18.975773  215682 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0412 19:56:18.975799  215682 kic_runner.go:114] Args: [docker exec --privileged calico-20220412195203-42006 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0412 19:56:19.072709  215682 cli_runner.go:164] Run: docker container inspect calico-20220412195203-42006 --format={{.State.Status}}
	I0412 19:56:19.109595  215682 machine.go:88] provisioning docker machine ...
	I0412 19:56:19.109640  215682 ubuntu.go:169] provisioning hostname "calico-20220412195203-42006"
	I0412 19:56:19.109692  215682 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220412195203-42006
	I0412 19:56:19.142391  215682 main.go:134] libmachine: Using SSH client type: native
	I0412 19:56:19.142573  215682 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7d71c0] 0x7da220 <nil>  [] 0s} 127.0.0.1 49367 <nil> <nil>}
	I0412 19:56:19.142601  215682 main.go:134] libmachine: About to run SSH command:
	sudo hostname calico-20220412195203-42006 && echo "calico-20220412195203-42006" | sudo tee /etc/hostname
	I0412 19:56:19.274903  215682 main.go:134] libmachine: SSH cmd err, output: <nil>: calico-20220412195203-42006
	
	I0412 19:56:19.274994  215682 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220412195203-42006
	I0412 19:56:19.311157  215682 main.go:134] libmachine: Using SSH client type: native
	I0412 19:56:19.311347  215682 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7d71c0] 0x7da220 <nil>  [] 0s} 127.0.0.1 49367 <nil> <nil>}
	I0412 19:56:19.311375  215682 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-20220412195203-42006' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-20220412195203-42006/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-20220412195203-42006' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0412 19:56:19.428661  215682 main.go:134] libmachine: SSH cmd err, output: <nil>: 
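The SSH script above follows the Debian/Ubuntu convention of pinning the node's own hostname to 127.0.1.1 so it stays resolvable without DNS; here the guard grep usually finds the entry Docker already wrote for the container's IP, so nothing changes. A quick check from the host (the container name is the one from this run):

    docker exec calico-20220412195203-42006 grep calico-20220412195203-42006 /etc/hosts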
	I0412 19:56:19.428704  215682 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube}
	I0412 19:56:19.428732  215682 ubuntu.go:177] setting up certificates
	I0412 19:56:19.428745  215682 provision.go:83] configureAuth start
	I0412 19:56:19.428833  215682 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220412195203-42006
	I0412 19:56:19.464379  215682 provision.go:138] copyHostCerts
	I0412 19:56:19.464438  215682 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem, removing ...
	I0412 19:56:19.464449  215682 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem
	I0412 19:56:19.464521  215682 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem (1082 bytes)
	I0412 19:56:19.464628  215682 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem, removing ...
	I0412 19:56:19.464640  215682 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem
	I0412 19:56:19.464676  215682 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem (1123 bytes)
	I0412 19:56:19.464752  215682 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem, removing ...
	I0412 19:56:19.464765  215682 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem
	I0412 19:56:19.464798  215682 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem (1675 bytes)
	I0412 19:56:19.464853  215682 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem org=jenkins.calico-20220412195203-42006 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube calico-20220412195203-42006]
	I0412 19:56:19.619939  215682 provision.go:172] copyRemoteCerts
	I0412 19:56:19.620006  215682 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0412 19:56:19.620041  215682 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220412195203-42006
	I0412 19:56:19.653346  215682 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49367 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/calico-20220412195203-42006/id_rsa Username:docker}
	I0412 19:56:19.740543  215682 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0412 19:56:19.759284  215682 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I0412 19:56:19.777638  215682 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0412 19:56:19.798387  215682 provision.go:86] duration metric: configureAuth took 369.62431ms
	I0412 19:56:19.798417  215682 ubuntu.go:193] setting minikube options for container-runtime
	I0412 19:56:19.798613  215682 config.go:178] Loaded profile config "calico-20220412195203-42006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.5
	I0412 19:56:19.798630  215682 machine.go:91] provisioned docker machine in 689.010527ms
	I0412 19:56:19.798639  215682 client.go:171] LocalClient.Create took 10.359782894s
	I0412 19:56:19.798659  215682 start.go:173] duration metric: libmachine.API.Create for "calico-20220412195203-42006" took 10.359837107s
	I0412 19:56:19.798679  215682 start.go:306] post-start starting for "calico-20220412195203-42006" (driver="docker")
	I0412 19:56:19.798694  215682 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0412 19:56:19.798750  215682 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0412 19:56:19.798797  215682 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220412195203-42006
	I0412 19:56:19.833531  215682 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49367 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/calico-20220412195203-42006/id_rsa Username:docker}
	I0412 19:56:19.920145  215682 ssh_runner.go:195] Run: cat /etc/os-release
	I0412 19:56:19.923772  215682 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0412 19:56:19.923814  215682 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0412 19:56:19.923839  215682 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0412 19:56:19.923849  215682 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0412 19:56:19.923871  215682 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/addons for local assets ...
	I0412 19:56:19.923934  215682 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files for local assets ...
	I0412 19:56:19.924029  215682 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/420062.pem -> 420062.pem in /etc/ssl/certs
	I0412 19:56:19.924165  215682 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0412 19:56:19.931399  215682 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/420062.pem --> /etc/ssl/certs/420062.pem (1708 bytes)
	I0412 19:56:19.949570  215682 start.go:309] post-start completed in 150.865561ms
	I0412 19:56:19.949974  215682 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220412195203-42006
	I0412 19:56:19.984988  215682 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/calico-20220412195203-42006/config.json ...
	I0412 19:56:19.985253  215682 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0412 19:56:19.985294  215682 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220412195203-42006
	I0412 19:56:20.022123  215682 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49367 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/calico-20220412195203-42006/id_rsa Username:docker}
	I0412 19:56:20.112816  215682 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0412 19:56:20.116888  215682 start.go:134] duration metric: createHost completed in 10.680669912s
	I0412 19:56:20.116918  215682 start.go:81] releasing machines lock for "calico-20220412195203-42006", held for 10.680867032s
	I0412 19:56:20.117026  215682 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20220412195203-42006
	I0412 19:56:20.149855  215682 ssh_runner.go:195] Run: systemctl --version
	I0412 19:56:20.149920  215682 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220412195203-42006
	I0412 19:56:20.149939  215682 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0412 19:56:20.150040  215682 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220412195203-42006
	I0412 19:56:20.185942  215682 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49367 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/calico-20220412195203-42006/id_rsa Username:docker}
	I0412 19:56:20.186399  215682 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49367 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/calico-20220412195203-42006/id_rsa Username:docker}
	I0412 19:56:20.274189  215682 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0412 19:56:20.310210  215682 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0412 19:56:20.324598  215682 docker.go:183] disabling docker service ...
	I0412 19:56:20.324667  215682 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0412 19:56:20.344047  215682 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0412 19:56:20.354175  215682 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0412 19:56:20.458847  215682 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0412 19:56:20.585317  215682 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0412 19:56:20.598110  215682 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0412 19:56:20.616520  215682 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLm1vbml0b3IudjEuY2dyb3VwcyJdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNiIKICAgIHN0YXRzX2NvbGxlY3RfcGVyaW9kID0gMTAKICAgIGVuYWJsZV90bHNfc3RyZWFtaW5nID0gZmFsc2UKICA
gIG1heF9jb250YWluZXJfbG9nX2xpbmVfc2l6ZSA9IDE2Mzg0CiAgICByZXN0cmljdF9vb21fc2NvcmVfYWRqID0gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZF0KICAgICAgZGlzY2FyZF91bnBhY2tlZF9sYXllcnMgPSB0cnVlCiAgICAgIHNuYXBzaG90dGVyID0gIm92ZXJsYXlmcyIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQuZGVmYXVsdF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnVudHJ1c3RlZF93b3JrbG9hZF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICIiCiAgICAgICAgcnVudGltZV9lbmdpbmUgPSAiIgogICAgICAgIHJ1bnRpbWVfcm9vdCA9ICIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzLnJ1bmNdCiAgICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICBTeXN0ZW1kQ2dyb3VwID0
gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY25pXQogICAgICBiaW5fZGlyID0gIi9vcHQvY25pL2JpbiIKICAgICAgY29uZl9kaXIgPSAiL2V0Yy9jbmkvbmV0LmQiCiAgICAgIGNvbmZfdGVtcGxhdGUgPSAiIgogICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5XQogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9yc10KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIucmVnaXN0cnkubWlycm9ycy4iZG9ja2VyLmlvIl0KICAgICAgICAgIGVuZHBvaW50ID0gWyJodHRwczovL3JlZ2lzdHJ5LTEuZG9ja2VyLmlvIl0KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5zZXJ2aWNlLnYxLmRpZmYtc2VydmljZSJdCiAgICBkZWZhdWx0ID0gWyJ3YWxraW5nIl0KICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5nYy52MS5zY2hlZHVsZXIiXQogICAgcGF1c2VfdGhyZXNob2xkID0gMC4wMgogICAgZGVsZXRpb25fdGhyZXNob2xkID0gMAogICAgbXV0YXRpb25fdGhyZXNob2xkID0gMTAwCiAgICBzY2hlZHVsZV9kZWxheSA9ICIwcyIKICAgIHN0YXJ0dXBfZGVsYXkgPSAiMTAwbXMiCg==" | base64 -d | sudo tee /etc/containerd/config.toml"
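The base64 payload above is plain TOML; shipping it encoded sidesteps shell quoting of the embedded config. Decoded for readability (an excerpt, not part of the original log), the settings include:

    version = 2
    root = "/var/lib/containerd"
    state = "/run/containerd"
    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "k8s.gcr.io/pause:3.6"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        SystemdCgroup = false
      [plugins."io.containerd.grpc.v1.cri".cni]
        bin_dir = "/opt/cni/bin"
        conf_dir = "/etc/cni/net.d"

SystemdCgroup = false here is consistent with the CgroupDriver:cgroupfs value handed to kubeadm further down.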
	I0412 19:56:20.633557  215682 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0412 19:56:20.640789  215682 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0412 19:56:20.647888  215682 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0412 19:56:20.736558  215682 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0412 19:56:20.812161  215682 start.go:441] Will wait 60s for socket path /run/containerd/containerd.sock
	I0412 19:56:20.812244  215682 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0412 19:56:20.816998  215682 start.go:462] Will wait 60s for crictl version
	I0412 19:56:20.817092  215682 ssh_runner.go:195] Run: sudo crictl version
	I0412 19:56:20.845438  215682 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-04-12T19:56:20Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
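This first failure is an expected race: restarting containerd brings the socket up before the CRI plugin finishes initializing, so the initial crictl version call lands too early and retry.go schedules another attempt. The same wait as a standalone shell loop (the 60s budget mirrors the "Will wait 60s" lines above; values are illustrative):

    timeout 60 sh -c 'until sudo crictl version >/dev/null 2>&1; do sleep 2; done'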
	I0412 19:56:31.892213  215682 ssh_runner.go:195] Run: sudo crictl version
	I0412 19:56:31.917011  215682 start.go:471] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.5.10
	RuntimeApiVersion:  v1alpha2
	I0412 19:56:31.917068  215682 ssh_runner.go:195] Run: containerd --version
	I0412 19:56:31.937821  215682 ssh_runner.go:195] Run: containerd --version
	I0412 19:56:31.961201  215682 out.go:176] * Preparing Kubernetes v1.23.5 on containerd 1.5.10 ...
	I0412 19:56:31.961277  215682 cli_runner.go:164] Run: docker network inspect calico-20220412195203-42006 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0412 19:56:31.994296  215682 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0412 19:56:31.998147  215682 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
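The /etc/hosts rewrite above is a deliberate two-step idiom: the filtered file plus the new record is assembled unprivileged in /tmp, and only the final cp runs under sudo, because in a plain sudo cmd > /etc/hosts the redirection is performed by the unprivileged calling shell and fails. The same pattern for any root-owned file (a generic sketch mirroring the log's bash -c invocation, not minikube code):

    { grep -v $'\thost.minikube.internal$' /etc/hosts; printf '192.168.58.1\thost.minikube.internal\n'; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts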
	I0412 19:56:32.009068  215682 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime containerd
	I0412 19:56:32.009128  215682 ssh_runner.go:195] Run: sudo crictl images --output json
	I0412 19:56:32.035433  215682 containerd.go:607] all images are preloaded for containerd runtime.
	I0412 19:56:32.035457  215682 containerd.go:521] Images already preloaded, skipping extraction
	I0412 19:56:32.035500  215682 ssh_runner.go:195] Run: sudo crictl images --output json
	I0412 19:56:32.060707  215682 containerd.go:607] all images are preloaded for containerd runtime.
	I0412 19:56:32.060732  215682 cache_images.go:84] Images are preloaded, skipping loading
	I0412 19:56:32.060776  215682 ssh_runner.go:195] Run: sudo crictl info
	I0412 19:56:32.087295  215682 cni.go:93] Creating CNI manager for "calico"
	I0412 19:56:32.087321  215682 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0412 19:56:32.087336  215682 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.23.5 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-20220412195203-42006 NodeName:calico-20220412195203-42006 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0412 19:56:32.087457  215682 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "calico-20220412195203-42006"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.5
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
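The stream above is a single file, four YAML documents long (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration), which the one kubeadm init invocation later in the log consumes. A config like this can be sanity-checked without mutating the node (assuming kubeadm v1.23 is on PATH):

    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run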
	
	I0412 19:56:32.087533  215682 kubeadm.go:936] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.5/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=calico-20220412195203-42006 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.58.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.5 ClusterName:calico-20220412195203-42006 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:}
	I0412 19:56:32.087581  215682 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.5
	I0412 19:56:32.095180  215682 binaries.go:44] Found k8s binaries, skipping transfer
	I0412 19:56:32.095261  215682 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0412 19:56:32.102992  215682 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (541 bytes)
	I0412 19:56:32.116433  215682 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0412 19:56:32.129970  215682 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2056 bytes)
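The 10-kubeadm.conf written above carries the [Service] block printed earlier, using the standard systemd drop-in pattern: the bare ExecStart= clears the command inherited from kubelet.service, and the second ExecStart= substitutes minikube's full flag set. The merged unit can be inspected inside the node with:

    systemctl cat kubelet
    systemctl show kubelet -p ExecStart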
	I0412 19:56:32.144753  215682 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0412 19:56:32.148009  215682 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0412 19:56:32.158056  215682 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/calico-20220412195203-42006 for IP: 192.168.58.2
	I0412 19:56:32.158177  215682 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.key
	I0412 19:56:32.158216  215682 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.key
	I0412 19:56:32.158267  215682 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/calico-20220412195203-42006/client.key
	I0412 19:56:32.158285  215682 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/calico-20220412195203-42006/client.crt with IP's: []
	I0412 19:56:32.238387  215682 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/calico-20220412195203-42006/client.crt ...
	I0412 19:56:32.238427  215682 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/calico-20220412195203-42006/client.crt: {Name:mke50b81c788799ae130f77304956c6f67100289 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 19:56:32.238643  215682 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/calico-20220412195203-42006/client.key ...
	I0412 19:56:32.238658  215682 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/calico-20220412195203-42006/client.key: {Name:mk67cc60b60a6323f9388722aa8a90d359e2468a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 19:56:32.238749  215682 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/calico-20220412195203-42006/apiserver.key.cee25041
	I0412 19:56:32.238769  215682 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/calico-20220412195203-42006/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0412 19:56:32.332193  215682 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/calico-20220412195203-42006/apiserver.crt.cee25041 ...
	I0412 19:56:32.332227  215682 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/calico-20220412195203-42006/apiserver.crt.cee25041: {Name:mk927ee91d606b82b19203d8599364e86d5346ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 19:56:32.332434  215682 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/calico-20220412195203-42006/apiserver.key.cee25041 ...
	I0412 19:56:32.332448  215682 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/calico-20220412195203-42006/apiserver.key.cee25041: {Name:mk6010e5474209b9f077ef9ce87607a5bf76e332 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 19:56:32.332535  215682 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/calico-20220412195203-42006/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/calico-20220412195203-42006/apiserver.crt
	I0412 19:56:32.332589  215682 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/calico-20220412195203-42006/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/calico-20220412195203-42006/apiserver.key
	I0412 19:56:32.332636  215682 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/calico-20220412195203-42006/proxy-client.key
	I0412 19:56:32.332651  215682 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/calico-20220412195203-42006/proxy-client.crt with IP's: []
	I0412 19:56:32.514223  215682 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/calico-20220412195203-42006/proxy-client.crt ...
	I0412 19:56:32.514268  215682 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/calico-20220412195203-42006/proxy-client.crt: {Name:mkd8901c6e0d957944946f4d212453df27d978ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 19:56:32.514488  215682 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/calico-20220412195203-42006/proxy-client.key ...
	I0412 19:56:32.514503  215682 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/calico-20220412195203-42006/proxy-client.key: {Name:mk49d65c1f5b34f680b78195a80540f5b9299137 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 19:56:32.514691  215682 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/42006.pem (1338 bytes)
	W0412 19:56:32.514736  215682 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/42006_empty.pem, impossibly tiny 0 bytes
	I0412 19:56:32.514749  215682 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem (1679 bytes)
	I0412 19:56:32.514771  215682 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem (1082 bytes)
	I0412 19:56:32.514798  215682 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem (1123 bytes)
	I0412 19:56:32.514821  215682 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem (1675 bytes)
	I0412 19:56:32.514909  215682 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/420062.pem (1708 bytes)
	I0412 19:56:32.515582  215682 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/calico-20220412195203-42006/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0412 19:56:32.535942  215682 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/calico-20220412195203-42006/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0412 19:56:32.556037  215682 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/calico-20220412195203-42006/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0412 19:56:32.575340  215682 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/calico-20220412195203-42006/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0412 19:56:32.595258  215682 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0412 19:56:32.614126  215682 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0412 19:56:32.632910  215682 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0412 19:56:32.650535  215682 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0412 19:56:32.668221  215682 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/420062.pem --> /usr/share/ca-certificates/420062.pem (1708 bytes)
	I0412 19:56:32.686330  215682 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0412 19:56:32.705205  215682 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/42006.pem --> /usr/share/ca-certificates/42006.pem (1338 bytes)
	I0412 19:56:32.725402  215682 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0412 19:56:32.738809  215682 ssh_runner.go:195] Run: openssl version
	I0412 19:56:32.743959  215682 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/420062.pem && ln -fs /usr/share/ca-certificates/420062.pem /etc/ssl/certs/420062.pem"
	I0412 19:56:32.752242  215682 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/420062.pem
	I0412 19:56:32.755459  215682 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Apr 12 19:26 /usr/share/ca-certificates/420062.pem
	I0412 19:56:32.755504  215682 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/420062.pem
	I0412 19:56:32.760534  215682 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/420062.pem /etc/ssl/certs/3ec20f2e.0"
	I0412 19:56:32.767981  215682 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0412 19:56:32.775928  215682 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0412 19:56:32.779228  215682 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Apr 12 19:21 /usr/share/ca-certificates/minikubeCA.pem
	I0412 19:56:32.779298  215682 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0412 19:56:32.784998  215682 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0412 19:56:32.793041  215682 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/42006.pem && ln -fs /usr/share/ca-certificates/42006.pem /etc/ssl/certs/42006.pem"
	I0412 19:56:32.801122  215682 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42006.pem
	I0412 19:56:32.804642  215682 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Apr 12 19:26 /usr/share/ca-certificates/42006.pem
	I0412 19:56:32.804706  215682 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42006.pem
	I0412 19:56:32.809965  215682 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/42006.pem /etc/ssl/certs/51391683.0"
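The hash-and-symlink sequence above implements OpenSSL's CA directory convention: openssl x509 -hash prints the certificate's subject-name hash, and verifiers look up issuers in /etc/ssl/certs under <hash>.0, hence b5213941.0 for minikubeCA.pem and 51391683.0 for 42006.pem. One link reproduced by hand (sketch):

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/$h.0"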
	I0412 19:56:32.817466  215682 kubeadm.go:391] StartCluster: {Name:calico-20220412195203-42006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:calico-20220412195203-42006 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0412 19:56:32.817571  215682 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0412 19:56:32.817652  215682 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0412 19:56:32.843982  215682 cri.go:87] found id: ""
	I0412 19:56:32.844049  215682 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0412 19:56:32.851708  215682 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0412 19:56:32.858723  215682 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0412 19:56:32.858783  215682 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0412 19:56:32.865696  215682 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0412 19:56:32.865762  215682 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0412 19:56:47.914189  215682 out.go:203]   - Generating certificates and keys ...
	I0412 19:56:47.917764  215682 out.go:203]   - Booting up control plane ...
	I0412 19:56:47.920809  215682 out.go:203]   - Configuring RBAC rules ...
	I0412 19:56:47.922853  215682 cni.go:93] Creating CNI manager for "calico"
	I0412 19:56:47.924777  215682 out.go:176] * Configuring Calico (Container Networking Interface) ...
	I0412 19:56:47.925045  215682 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.5/kubectl ...
	I0412 19:56:47.925070  215682 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (202049 bytes)
	I0412 19:56:47.942361  215682 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0412 19:56:49.383636  215682 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.23.5/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.441236235s)
	I0412 19:56:49.383686  215682 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0412 19:56:49.383783  215682 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 19:56:49.383788  215682 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl label nodes minikube.k8s.io/version=v1.25.2 minikube.k8s.io/commit=dcd548d63d1c0dcbdc0ffc0bd37d4379117c142f minikube.k8s.io/name=calico-20220412195203-42006 minikube.k8s.io/updated_at=2022_04_12T19_56_49_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 19:56:49.392800  215682 ops.go:34] apiserver oom_adj: -16
	I0412 19:56:49.454390  215682 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 19:56:50.044221  215682 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 19:56:50.544313  215682 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 19:56:51.044382  215682 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 19:56:51.543982  215682 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 19:56:52.044454  215682 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 19:56:52.543915  215682 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 19:56:53.044303  215682 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 19:56:53.544572  215682 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 19:56:54.043889  215682 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 19:56:54.543626  215682 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 19:56:55.043947  215682 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 19:56:55.543943  215682 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 19:56:56.043801  215682 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 19:56:56.544196  215682 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 19:56:57.044317  215682 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 19:56:57.544557  215682 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 19:56:58.043598  215682 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 19:56:58.543824  215682 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 19:56:59.044553  215682 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 19:56:59.543825  215682 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 19:57:00.043643  215682 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 19:57:00.543900  215682 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 19:57:01.044593  215682 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 19:57:01.544176  215682 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 19:57:02.043680  215682 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 19:57:02.126112  215682 kubeadm.go:1020] duration metric: took 12.742393136s to wait for elevateKubeSystemPrivileges.
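The burst of kubectl get sa default calls above, roughly one every 500ms, is how minikube decides the cluster is usable: pods in a namespace cannot be admitted until controller-manager has minted that namespace's default service account, so elevateKubeSystemPrivileges polls for it after applying the RBAC binding. A standalone equivalent (sketch):

    until kubectl -n default get sa default >/dev/null 2>&1; do sleep 0.5; done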
	I0412 19:57:02.126142  215682 kubeadm.go:393] StartCluster complete in 29.308690386s
	I0412 19:57:02.126162  215682 settings.go:142] acquiring lock: {Name:mkaf0259d09993f7f0249c32b54fea561e21f88c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 19:57:02.126280  215682 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0412 19:57:02.127512  215682 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig: {Name:mk47182dadcc139652898f38b199a7292a3b4031 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 19:57:02.645370  215682 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "calico-20220412195203-42006" rescaled to 1
	I0412 19:57:02.645424  215682 start.go:208] Will wait 5m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0412 19:57:02.647782  215682 out.go:176] * Verifying Kubernetes components...
	I0412 19:57:02.647849  215682 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0412 19:57:02.645479  215682 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0412 19:57:02.645496  215682 addons.go:415] enableAddons start: toEnable=map[], additional=[]
	I0412 19:57:02.645702  215682 config.go:178] Loaded profile config "calico-20220412195203-42006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.5
	I0412 19:57:02.647988  215682 addons.go:65] Setting storage-provisioner=true in profile "calico-20220412195203-42006"
	I0412 19:57:02.648012  215682 addons.go:153] Setting addon storage-provisioner=true in "calico-20220412195203-42006"
	W0412 19:57:02.648024  215682 addons.go:165] addon storage-provisioner should already be in state true
	I0412 19:57:02.648054  215682 addons.go:65] Setting default-storageclass=true in profile "calico-20220412195203-42006"
	I0412 19:57:02.648110  215682 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-20220412195203-42006"
	I0412 19:57:02.648113  215682 host.go:66] Checking if "calico-20220412195203-42006" exists ...
	I0412 19:57:02.648539  215682 cli_runner.go:164] Run: docker container inspect calico-20220412195203-42006 --format={{.State.Status}}
	I0412 19:57:02.648624  215682 cli_runner.go:164] Run: docker container inspect calico-20220412195203-42006 --format={{.State.Status}}
	I0412 19:57:02.695223  215682 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0412 19:57:02.695391  215682 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0412 19:57:02.695412  215682 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0412 19:57:02.695481  215682 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220412195203-42006
	I0412 19:57:02.700389  215682 addons.go:153] Setting addon default-storageclass=true in "calico-20220412195203-42006"
	W0412 19:57:02.700430  215682 addons.go:165] addon default-storageclass should already be in state true
	I0412 19:57:02.700480  215682 host.go:66] Checking if "calico-20220412195203-42006" exists ...
	I0412 19:57:02.701016  215682 cli_runner.go:164] Run: docker container inspect calico-20220412195203-42006 --format={{.State.Status}}
	I0412 19:57:02.752982  215682 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0412 19:57:02.753017  215682 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0412 19:57:02.753077  215682 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20220412195203-42006
	I0412 19:57:02.754924  215682 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49367 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/calico-20220412195203-42006/id_rsa Username:docker}
	I0412 19:57:02.792236  215682 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49367 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/calico-20220412195203-42006/id_rsa Username:docker}
	I0412 19:57:02.806439  215682 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0412 19:57:02.807854  215682 node_ready.go:35] waiting up to 5m0s for node "calico-20220412195203-42006" to be "Ready" ...
	I0412 19:57:02.811792  215682 node_ready.go:49] node "calico-20220412195203-42006" has status "Ready":"True"
	I0412 19:57:02.811821  215682 node_ready.go:38] duration metric: took 3.932543ms waiting for node "calico-20220412195203-42006" to be "Ready" ...
	I0412 19:57:02.811835  215682 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0412 19:57:02.824778  215682 pod_ready.go:78] waiting up to 5m0s for pod "calico-kube-controllers-8594699699-bnsh8" in "kube-system" namespace to be "Ready" ...
	I0412 19:57:03.082352  215682 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0412 19:57:03.102004  215682 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0412 19:57:04.301234  215682 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.218838563s)
	I0412 19:57:04.301340  215682 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.494817216s)
	I0412 19:57:04.301362  215682 start.go:777] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS
	I0412 19:57:04.380376  215682 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.278273056s)
	I0412 19:57:04.382825  215682 out.go:176] * Enabled addons: default-storageclass, storage-provisioner
	I0412 19:57:04.382862  215682 addons.go:417] enableAddons completed in 1.737368496s
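The sed pipeline that completed at 19:57:04 above is how minikube injects the host.minikube.internal record: it rewrites the coredns ConfigMap in place, inserting a hosts stanza immediately above the forward plugin. Assuming the stock Corefile layout, the edited server block ends up looking roughly like this (a sketch, not the literal ConfigMap from this run):

	.:53 {
	    # ... errors, health, kubernetes plugins as shipped ...
	    hosts {
	       192.168.58.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf
	    # ... cache, loop, reload, loadbalance ...
	}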
	I0412 19:57:04.892015  215682 pod_ready.go:102] pod "calico-kube-controllers-8594699699-bnsh8" in "kube-system" namespace has status "Ready":"False"
	[pod_ready.go:102 polls from 19:57:06 through 20:00:58 elided: minikube re-checked pod "calico-kube-controllers-8594699699-bnsh8" roughly every 2.5s and it reported "Ready":"False" every time]
	I0412 20:01:01.391754  215682 pod_ready.go:102] pod "calico-kube-controllers-8594699699-bnsh8" in "kube-system" namespace has status "Ready":"False"
	I0412 20:01:02.897309  215682 pod_ready.go:81] duration metric: took 4m0.072489065s waiting for pod "calico-kube-controllers-8594699699-bnsh8" in "kube-system" namespace to be "Ready" ...
	E0412 20:01:02.897340  215682 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0412 20:01:02.897351  215682 pod_ready.go:78] waiting up to 5m0s for pod "calico-node-rp9nw" in "kube-system" namespace to be "Ready" ...
	I0412 20:01:04.910641  215682 pod_ready.go:102] pod "calico-node-rp9nw" in "kube-system" namespace has status "Ready":"False"
	[pod_ready.go:102 polls from 20:01:07 through 20:04:59 elided: pod "calico-node-rp9nw" likewise reported "Ready":"False" on every ~2.5s re-check]
	I0412 20:05:02.411462  215682 pod_ready.go:102] pod "calico-node-rp9nw" in "kube-system" namespace has status "Ready":"False"
	I0412 20:05:02.927702  215682 pod_ready.go:81] duration metric: took 4m0.030332705s waiting for pod "calico-node-rp9nw" in "kube-system" namespace to be "Ready" ...
	E0412 20:05:02.927743  215682 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0412 20:05:02.927764  215682 pod_ready.go:38] duration metric: took 8m0.115913407s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0412 20:05:02.931901  215682 out.go:176] 
	W0412 20:05:02.932109  215682 out.go:241] X Exiting due to GUEST_START: wait 5m0s for node: extra waiting: timed out waiting 5m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	W0412 20:05:02.932131  215682 out.go:241] * 
	W0412 20:05:02.932868  215682 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0412 20:05:02.935051  215682 out.go:176] 

                                                
                                                
** /stderr **
net_test.go:100: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (533.90s)
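Both waits above ran their full 4m0s with the pods stuck at Ready=False, so the Calico data plane never came up at all rather than coming up slowly. A reasonable first triage pass, assuming the profile's kubeconfig context still exists and the standard Calico manifest (where the daemonset's main container is named calico-node), would be:

	# Why are the pods not Ready? Events show scheduling, image-pull, and probe failures
	kubectl --context calico-20220412195203-42006 -n kube-system describe pod calico-node-rp9nw
	kubectl --context calico-20220412195203-42006 -n kube-system describe pod calico-kube-controllers-8594699699-bnsh8
	# calico-node's readiness gates on felix/BGP health; its own log usually names the cause
	kubectl --context calico-20220412195203-42006 -n kube-system logs calico-node-rp9nw -c calico-node --tail=100
	# Or capture everything at once, as the failure box suggests
	out/minikube-linux-amd64 -p calico-20220412195203-42006 logs --file=logs.txt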

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (367.7s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:162: (dbg) Run:  kubectl --context enable-default-cni-20220412195202-42006 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:162: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220412195202-42006 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.140701897s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:162: (dbg) Run:  kubectl --context enable-default-cni-20220412195202-42006 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:162: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220412195202-42006 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.131048001s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:162: (dbg) Run:  kubectl --context enable-default-cni-20220412195202-42006 exec deployment/netcat -- nslookup kubernetes.default

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:162: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220412195202-42006 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.131228542s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:162: (dbg) Run:  kubectl --context enable-default-cni-20220412195202-42006 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:162: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220412195202-42006 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.130134207s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:162: (dbg) Run:  kubectl --context enable-default-cni-20220412195202-42006 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:162: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220412195202-42006 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.136925634s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:162: (dbg) Run:  kubectl --context enable-default-cni-20220412195202-42006 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:162: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220412195202-42006 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.14234053s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0412 20:00:31.558382   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/auto-20220412195201-42006/client.crt: no such file or directory
E0412 20:00:31.563771   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/auto-20220412195201-42006/client.crt: no such file or directory
E0412 20:00:31.574066   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/auto-20220412195201-42006/client.crt: no such file or directory
E0412 20:00:31.594286   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/auto-20220412195201-42006/client.crt: no such file or directory
E0412 20:00:31.634556   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/auto-20220412195201-42006/client.crt: no such file or directory
E0412 20:00:31.714902   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/auto-20220412195201-42006/client.crt: no such file or directory
E0412 20:00:31.875369   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/auto-20220412195201-42006/client.crt: no such file or directory
E0412 20:00:32.195978   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/auto-20220412195201-42006/client.crt: no such file or directory
E0412 20:00:32.836835   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/auto-20220412195201-42006/client.crt: no such file or directory
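The cert_rotation.go:168 errors interleaved here and below come from the long-lived test process (pid 42006), apparently client-go's client-certificate reload watcher: it still holds transports keyed to kubeconfig entries for profiles that earlier tests tore down (auto, custom-weave, cilium, and so on), so each periodic reload fails with "no such file or directory". They are noise relative to the DNS failure under test. A hedged way to quiet them, if desired, is to delete the stale profiles so their kubeconfig entries go away:

	# hypothetical cleanup; profile names taken from the errors above
	out/minikube-linux-amd64 delete -p auto-20220412195201-42006
	out/minikube-linux-amd64 delete -p custom-weave-20220412195203-42006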
net_test.go:162: (dbg) Run:  kubectl --context enable-default-cni-20220412195202-42006 exec deployment/netcat -- nslookup kubernetes.default
E0412 20:00:34.117955   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/auto-20220412195201-42006/client.crt: no such file or directory
E0412 20:00:36.678204   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/auto-20220412195201-42006/client.crt: no such file or directory
E0412 20:00:41.799205   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/auto-20220412195201-42006/client.crt: no such file or directory
net_test.go:162: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220412195202-42006 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.136950095s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0412 20:00:52.039721   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/auto-20220412195201-42006/client.crt: no such file or directory
E0412 20:00:54.807371   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/ingress-addon-legacy-20220412192911-42006/client.crt: no such file or directory
E0412 20:00:58.260573   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/custom-weave-20220412195203-42006/client.crt: no such file or directory
E0412 20:00:58.265857   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/custom-weave-20220412195203-42006/client.crt: no such file or directory
E0412 20:00:58.276229   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/custom-weave-20220412195203-42006/client.crt: no such file or directory
E0412 20:00:58.296557   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/custom-weave-20220412195203-42006/client.crt: no such file or directory
E0412 20:00:58.336879   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/custom-weave-20220412195203-42006/client.crt: no such file or directory
E0412 20:00:58.417257   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/custom-weave-20220412195203-42006/client.crt: no such file or directory
E0412 20:00:58.577549   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/custom-weave-20220412195203-42006/client.crt: no such file or directory
E0412 20:00:58.898408   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/custom-weave-20220412195203-42006/client.crt: no such file or directory
E0412 20:00:59.539365   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/custom-weave-20220412195203-42006/client.crt: no such file or directory
E0412 20:01:00.819873   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/custom-weave-20220412195203-42006/client.crt: no such file or directory
net_test.go:162: (dbg) Run:  kubectl --context enable-default-cni-20220412195202-42006 exec deployment/netcat -- nslookup kubernetes.default
E0412 20:01:03.380228   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/custom-weave-20220412195203-42006/client.crt: no such file or directory
E0412 20:01:05.719995   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/addons-20220412192056-42006/client.crt: no such file or directory
E0412 20:01:08.500480   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/custom-weave-20220412195203-42006/client.crt: no such file or directory
E0412 20:01:12.520230   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/auto-20220412195201-42006/client.crt: no such file or directory
net_test.go:162: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220412195202-42006 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.137060129s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0412 20:01:18.741594   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/custom-weave-20220412195203-42006/client.crt: no such file or directory
net_test.go:162: (dbg) Run:  kubectl --context enable-default-cni-20220412195202-42006 exec deployment/netcat -- nslookup kubernetes.default
E0412 20:01:39.222719   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/custom-weave-20220412195203-42006/client.crt: no such file or directory
net_test.go:162: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220412195202-42006 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.141523871s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0412 20:01:53.480716   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/auto-20220412195201-42006/client.crt: no such file or directory
net_test.go:162: (dbg) Run:  kubectl --context enable-default-cni-20220412195202-42006 exec deployment/netcat -- nslookup kubernetes.default
E0412 20:02:10.366828   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/cilium-20220412195203-42006/client.crt: no such file or directory
E0412 20:02:10.372136   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/cilium-20220412195203-42006/client.crt: no such file or directory
E0412 20:02:10.382413   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/cilium-20220412195203-42006/client.crt: no such file or directory
E0412 20:02:10.402779   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/cilium-20220412195203-42006/client.crt: no such file or directory
E0412 20:02:10.443306   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/cilium-20220412195203-42006/client.crt: no such file or directory
E0412 20:02:10.523633   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/cilium-20220412195203-42006/client.crt: no such file or directory
E0412 20:02:10.684046   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/cilium-20220412195203-42006/client.crt: no such file or directory
E0412 20:02:11.004655   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/cilium-20220412195203-42006/client.crt: no such file or directory
E0412 20:02:11.644917   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/cilium-20220412195203-42006/client.crt: no such file or directory
net_test.go:162: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220412195202-42006 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.138076599s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
E0412 20:02:12.925989   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/cilium-20220412195203-42006/client.crt: no such file or directory
E0412 20:02:15.486565   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/cilium-20220412195203-42006/client.crt: no such file or directory
E0412 20:02:20.183475   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/custom-weave-20220412195203-42006/client.crt: no such file or directory
E0412 20:02:20.607225   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/cilium-20220412195203-42006/client.crt: no such file or directory
E0412 20:02:30.848215   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/cilium-20220412195203-42006/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:162: (dbg) Run:  kubectl --context enable-default-cni-20220412195202-42006 exec deployment/netcat -- nslookup kubernetes.default
E0412 20:03:02.669797   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/addons-20220412192056-42006/client.crt: no such file or directory
E0412 20:03:14.515546   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/functional-20220412192609-42006/client.crt: no such file or directory
E0412 20:03:15.401377   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/auto-20220412195201-42006/client.crt: no such file or directory
net_test.go:162: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220412195202-42006 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.132217707s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
E0412 20:03:32.290122   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/cilium-20220412195203-42006/client.crt: no such file or directory
E0412 20:03:42.104670   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/custom-weave-20220412195203-42006/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:162: (dbg) Run:  kubectl --context enable-default-cni-20220412195202-42006 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:162: (dbg) Non-zero exit: kubectl --context enable-default-cni-20220412195202-42006 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.133590784s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:168: failed to do nslookup on kubernetes.default: exit status 1
net_test.go:173: failed nslookup: got=";; connection timed out; no servers could be reached\n\n\n", want=*"10.96.0.1"*
--- FAIL: TestNetworkPlugins/group/enable-default-cni/DNS (367.70s)
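For reference, the probe that fails above reduces to running nslookup inside the netcat deployment and checking the answer for the kubernetes.default ClusterIP; the test wants "10.96.0.1" but gets only a timeout. The following standalone Go sketch reproduces that check, assuming kubectl is on PATH and the enable-default-cni profile's kubeconfig context still exists. It illustrates the probe as logged above and is not net_test.go's actual source.

	// dnsprobe.go: hypothetical reproduction of the DNS check logged by net_test.go:162.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same command the test runs; the context name is taken from the log above.
		out, err := exec.Command(
			"kubectl", "--context", "enable-default-cni-20220412195202-42006",
			"exec", "deployment/netcat", "--",
			"nslookup", "kubernetes.default",
		).CombinedOutput()
		if err != nil {
			fmt.Printf("nslookup failed: %v\n%s", err, out)
			return
		}
		// The test asserts that the answer contains the apiserver ClusterIP.
		if strings.Contains(string(out), "10.96.0.1") {
			fmt.Println("DNS resolution OK")
		} else {
			fmt.Printf("unexpected answer:\n%s", out)
		}
	}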

TestNetworkPlugins/group/kindnet/Start (292.79s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-20220412195202-42006 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker  --container-runtime=containerd

=== CONT  TestNetworkPlugins/group/kindnet/Start
net_test.go:98: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kindnet-20220412195202-42006 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker  --container-runtime=containerd: exit status 80 (4m52.777129102s)

-- stdout --
	* [kindnet-20220412195202-42006] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=13812
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on user configuration
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	* Using Docker driver with the root privilege
	* Starting control plane node kindnet-20220412195202-42006 in cluster kindnet-20220412195202-42006
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* Preparing Kubernetes v1.23.5 on containerd 1.5.10 ...
	  - kubelet.cni-conf-dir=/etc/cni/net.mk
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

-- /stdout --
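The run above exits with status 80 even though stdout reaches "Verifying Kubernetes components..."; the stderr log below shows the full wait set that --wait=true applies (apiserver, apps_running, default_sa, extra, kubelet, node_ready, system_pods). As a rough manual stand-in for that verification, one can poll node and kube-system pod readiness with kubectl wait, as in this sketch; the context name comes from this run, the two checks only approximate minikube's component list, and this is not minikube's verification code.

	// waitcheck.go: hypothetical manual approximation of minikube's --wait verification.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		ctx := "kindnet-20220412195202-42006" // profile/context name from this run
		checks := [][]string{
			// node_ready: every node reports the Ready condition
			{"--context", ctx, "wait", "--for=condition=Ready", "nodes", "--all", "--timeout=300s"},
			// system_pods / apps_running (approximate): kube-system pods become Ready
			{"--context", ctx, "wait", "--for=condition=Ready", "pods", "--all", "-n", "kube-system", "--timeout=300s"},
		}
		for _, args := range checks {
			if out, err := exec.Command("kubectl", args...).CombinedOutput(); err != nil {
				fmt.Printf("kubectl %v failed: %v\n%s", args, err, out)
				return
			}
		}
		fmt.Println("node and kube-system pods are Ready")
	}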
** stderr ** 
	I0412 19:59:24.334098  234625 out.go:297] Setting OutFile to fd 1 ...
	I0412 19:59:24.334239  234625 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0412 19:59:24.334252  234625 out.go:310] Setting ErrFile to fd 2...
	I0412 19:59:24.334260  234625 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0412 19:59:24.334387  234625 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/bin
	I0412 19:59:24.334683  234625 out.go:304] Setting JSON to false
	I0412 19:59:24.336564  234625 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":9718,"bootTime":1649783847,"procs":934,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1023-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0412 19:59:24.336639  234625 start.go:125] virtualization: kvm guest
	I0412 19:59:24.339497  234625 out.go:176] * [kindnet-20220412195202-42006] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	I0412 19:59:24.341050  234625 out.go:176]   - MINIKUBE_LOCATION=13812
	I0412 19:59:24.339701  234625 notify.go:193] Checking for updates...
	I0412 19:59:24.342602  234625 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0412 19:59:24.344189  234625 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0412 19:59:24.345784  234625 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube
	I0412 19:59:24.347397  234625 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0412 19:59:24.347890  234625 config.go:178] Loaded profile config "calico-20220412195203-42006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.5
	I0412 19:59:24.347994  234625 config.go:178] Loaded profile config "enable-default-cni-20220412195202-42006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.5
	I0412 19:59:24.348140  234625 config.go:178] Loaded profile config "pause-20220412195428-42006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.5
	I0412 19:59:24.348206  234625 driver.go:346] Setting default libvirt URI to qemu:///system
	I0412 19:59:24.394733  234625 docker.go:137] docker version: linux-20.10.14
	I0412 19:59:24.394842  234625 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0412 19:59:24.495597  234625 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:75 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:49 SystemTime:2022-04-12 19:59:24.426483159 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1023-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662799872 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0412 19:59:24.495701  234625 docker.go:254] overlay module found
	I0412 19:59:24.498059  234625 out.go:176] * Using the docker driver based on user configuration
	I0412 19:59:24.498101  234625 start.go:284] selected driver: docker
	I0412 19:59:24.498109  234625 start.go:801] validating driver "docker" against <nil>
	I0412 19:59:24.498154  234625 start.go:812] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	W0412 19:59:24.498233  234625 oci.go:120] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0412 19:59:24.498258  234625 out.go:241] ! Your cgroup does not allow setting memory.
	! Your cgroup does not allow setting memory.
	I0412 19:59:24.499962  234625 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0412 19:59:24.500690  234625 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0412 19:59:24.600012  234625 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:75 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:49 SystemTime:2022-04-12 19:59:24.531537467 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1023-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662799872 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0412 19:59:24.600181  234625 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0412 19:59:24.600379  234625 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0412 19:59:24.602706  234625 out.go:176] * Using Docker driver with the root privilege
	I0412 19:59:24.602738  234625 cni.go:93] Creating CNI manager for "kindnet"
	I0412 19:59:24.602753  234625 cni.go:217] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0412 19:59:24.602762  234625 cni.go:222] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0412 19:59:24.602775  234625 start_flags.go:301] Found "CNI" CNI - setting NetworkPlugin=cni
	I0412 19:59:24.602791  234625 start_flags.go:306] config:
	{Name:kindnet-20220412195202-42006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:kindnet-20220412195202-42006 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0412 19:59:24.604933  234625 out.go:176] * Starting control plane node kindnet-20220412195202-42006 in cluster kindnet-20220412195202-42006
	I0412 19:59:24.605003  234625 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0412 19:59:24.606597  234625 out.go:176] * Pulling base image ...
	I0412 19:59:24.606630  234625 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime containerd
	I0412 19:59:24.606673  234625 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-containerd-overlay2-amd64.tar.lz4
	I0412 19:59:24.606687  234625 cache.go:57] Caching tarball of preloaded images
	I0412 19:59:24.606723  234625 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon
	I0412 19:59:24.606991  234625 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0412 19:59:24.607011  234625 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.5 on containerd
	I0412 19:59:24.607155  234625 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220412195202-42006/config.json ...
	I0412 19:59:24.607189  234625 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220412195202-42006/config.json: {Name:mk96c1d1e18e9cc0d948a88792a7261621bb1906 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 19:59:24.657122  234625 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon, skipping pull
	I0412 19:59:24.657151  234625 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 exists in daemon, skipping load
	I0412 19:59:24.657174  234625 cache.go:206] Successfully downloaded all kic artifacts
	I0412 19:59:24.657214  234625 start.go:352] acquiring machines lock for kindnet-20220412195202-42006: {Name:mk9278724d41a33f689e63fe04712fa9ece6a9db Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0412 19:59:24.657383  234625 start.go:356] acquired machines lock for "kindnet-20220412195202-42006" in 129.688µs
	I0412 19:59:24.657415  234625 start.go:91] Provisioning new machine with config: &{Name:kindnet-20220412195202-42006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:kindnet-20220412195202-42006 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0412 19:59:24.657537  234625 start.go:131] createHost starting for "" (driver="docker")
	I0412 19:59:24.660324  234625 out.go:203] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0412 19:59:24.660584  234625 start.go:165] libmachine.API.Create for "kindnet-20220412195202-42006" (driver="docker")
	I0412 19:59:24.660619  234625 client.go:168] LocalClient.Create starting
	I0412 19:59:24.660700  234625 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem
	I0412 19:59:24.660743  234625 main.go:134] libmachine: Decoding PEM data...
	I0412 19:59:24.660767  234625 main.go:134] libmachine: Parsing certificate...
	I0412 19:59:24.660848  234625 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem
	I0412 19:59:24.660881  234625 main.go:134] libmachine: Decoding PEM data...
	I0412 19:59:24.660901  234625 main.go:134] libmachine: Parsing certificate...
	I0412 19:59:24.661225  234625 cli_runner.go:164] Run: docker network inspect kindnet-20220412195202-42006 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0412 19:59:24.694938  234625 cli_runner.go:211] docker network inspect kindnet-20220412195202-42006 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0412 19:59:24.695024  234625 network_create.go:272] running [docker network inspect kindnet-20220412195202-42006] to gather additional debugging logs...
	I0412 19:59:24.695052  234625 cli_runner.go:164] Run: docker network inspect kindnet-20220412195202-42006
	W0412 19:59:24.730811  234625 cli_runner.go:211] docker network inspect kindnet-20220412195202-42006 returned with exit code 1
	I0412 19:59:24.730843  234625 network_create.go:275] error running [docker network inspect kindnet-20220412195202-42006]: docker network inspect kindnet-20220412195202-42006: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kindnet-20220412195202-42006
	I0412 19:59:24.730878  234625 network_create.go:277] output of [docker network inspect kindnet-20220412195202-42006]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kindnet-20220412195202-42006
	
	** /stderr **
	I0412 19:59:24.730940  234625 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0412 19:59:24.768260  234625 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-3941532cd703 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:87:d3:29:2b}}
	I0412 19:59:24.768721  234625 network.go:240] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName:br-6a56a3e6bf06 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:9a:ff:38:75}}
	I0412 19:59:24.769301  234625 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.67.0:0xc000c22240] misses:0}
	I0412 19:59:24.769343  234625 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0412 19:59:24.769356  234625 network_create.go:115] attempt to create docker network kindnet-20220412195202-42006 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0412 19:59:24.769429  234625 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kindnet-20220412195202-42006
	I0412 19:59:24.841511  234625 network_create.go:99] docker network kindnet-20220412195202-42006 192.168.67.0/24 created
	I0412 19:59:24.841545  234625 kic.go:106] calculated static IP "192.168.67.2" for the "kindnet-20220412195202-42006" container
	I0412 19:59:24.841619  234625 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0412 19:59:24.877293  234625 cli_runner.go:164] Run: docker volume create kindnet-20220412195202-42006 --label name.minikube.sigs.k8s.io=kindnet-20220412195202-42006 --label created_by.minikube.sigs.k8s.io=true
	I0412 19:59:24.915458  234625 oci.go:103] Successfully created a docker volume kindnet-20220412195202-42006
	I0412 19:59:24.915539  234625 cli_runner.go:164] Run: docker run --rm --name kindnet-20220412195202-42006-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-20220412195202-42006 --entrypoint /usr/bin/test -v kindnet-20220412195202-42006:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -d /var/lib
	I0412 19:59:25.504270  234625 oci.go:107] Successfully prepared a docker volume kindnet-20220412195202-42006
	I0412 19:59:25.504323  234625 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime containerd
	I0412 19:59:25.504354  234625 kic.go:179] Starting extracting preloaded images to volume ...
	I0412 19:59:25.504427  234625 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-20220412195202-42006:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -I lz4 -xf /preloaded.tar -C /extractDir
	I0412 19:59:33.135503  234625 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-20220412195202-42006:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -I lz4 -xf /preloaded.tar -C /extractDir: (7.630958725s)
	I0412 19:59:33.135546  234625 kic.go:188] duration metric: took 7.631188 seconds to extract preloaded images to volume
	W0412 19:59:33.135597  234625 oci.go:136] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0412 19:59:33.135612  234625 oci.go:120] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0412 19:59:33.135684  234625 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0412 19:59:33.236242  234625 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kindnet-20220412195202-42006 --name kindnet-20220412195202-42006 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-20220412195202-42006 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kindnet-20220412195202-42006 --network kindnet-20220412195202-42006 --ip 192.168.67.2 --volume kindnet-20220412195202-42006:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5
	I0412 19:59:33.700774  234625 cli_runner.go:164] Run: docker container inspect kindnet-20220412195202-42006 --format={{.State.Running}}
	I0412 19:59:33.772841  234625 cli_runner.go:164] Run: docker container inspect kindnet-20220412195202-42006 --format={{.State.Status}}
	I0412 19:59:33.814140  234625 cli_runner.go:164] Run: docker exec kindnet-20220412195202-42006 stat /var/lib/dpkg/alternatives/iptables
	I0412 19:59:33.885208  234625 oci.go:279] the created container "kindnet-20220412195202-42006" has a running status.
	I0412 19:59:33.885243  234625 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/kindnet-20220412195202-42006/id_rsa...
	I0412 19:59:33.988927  234625 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/kindnet-20220412195202-42006/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0412 19:59:34.095658  234625 cli_runner.go:164] Run: docker container inspect kindnet-20220412195202-42006 --format={{.State.Status}}
	I0412 19:59:34.154504  234625 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0412 19:59:34.154534  234625 kic_runner.go:114] Args: [docker exec --privileged kindnet-20220412195202-42006 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0412 19:59:34.265172  234625 cli_runner.go:164] Run: docker container inspect kindnet-20220412195202-42006 --format={{.State.Status}}
	I0412 19:59:34.303689  234625 machine.go:88] provisioning docker machine ...
	I0412 19:59:34.303737  234625 ubuntu.go:169] provisioning hostname "kindnet-20220412195202-42006"
	I0412 19:59:34.303791  234625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220412195202-42006
	I0412 19:59:34.342549  234625 main.go:134] libmachine: Using SSH client type: native
	I0412 19:59:34.342769  234625 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7d71c0] 0x7da220 <nil>  [] 0s} 127.0.0.1 49382 <nil> <nil>}
	I0412 19:59:34.342791  234625 main.go:134] libmachine: About to run SSH command:
	sudo hostname kindnet-20220412195202-42006 && echo "kindnet-20220412195202-42006" | sudo tee /etc/hostname
	I0412 19:59:34.478710  234625 main.go:134] libmachine: SSH cmd err, output: <nil>: kindnet-20220412195202-42006
	
	I0412 19:59:34.478797  234625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220412195202-42006
	I0412 19:59:34.514508  234625 main.go:134] libmachine: Using SSH client type: native
	I0412 19:59:34.514696  234625 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7d71c0] 0x7da220 <nil>  [] 0s} 127.0.0.1 49382 <nil> <nil>}
	I0412 19:59:34.514729  234625 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-20220412195202-42006' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-20220412195202-42006/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-20220412195202-42006' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0412 19:59:34.636254  234625 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0412 19:59:34.636282  234625 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube}
	I0412 19:59:34.636301  234625 ubuntu.go:177] setting up certificates
	I0412 19:59:34.636310  234625 provision.go:83] configureAuth start
	I0412 19:59:34.636356  234625 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-20220412195202-42006
	I0412 19:59:34.670840  234625 provision.go:138] copyHostCerts
	I0412 19:59:34.670908  234625 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem, removing ...
	I0412 19:59:34.670921  234625 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem
	I0412 19:59:34.670988  234625 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem (1082 bytes)
	I0412 19:59:34.671081  234625 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem, removing ...
	I0412 19:59:34.671096  234625 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem
	I0412 19:59:34.671123  234625 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem (1123 bytes)
	I0412 19:59:34.671173  234625 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem, removing ...
	I0412 19:59:34.671181  234625 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem
	I0412 19:59:34.671204  234625 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem (1675 bytes)
	I0412 19:59:34.671242  234625 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem org=jenkins.kindnet-20220412195202-42006 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube kindnet-20220412195202-42006]
	I0412 19:59:34.782478  234625 provision.go:172] copyRemoteCerts
	I0412 19:59:34.782544  234625 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0412 19:59:34.782579  234625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220412195202-42006
	I0412 19:59:34.817760  234625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49382 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/kindnet-20220412195202-42006/id_rsa Username:docker}
	I0412 19:59:34.906211  234625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0412 19:59:34.925349  234625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0412 19:59:34.947214  234625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0412 19:59:34.966787  234625 provision.go:86] duration metric: configureAuth took 330.462021ms
	I0412 19:59:34.966815  234625 ubuntu.go:193] setting minikube options for container-runtime
	I0412 19:59:34.967000  234625 config.go:178] Loaded profile config "kindnet-20220412195202-42006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.5
	I0412 19:59:34.967013  234625 machine.go:91] provisioned docker machine in 663.294289ms
	I0412 19:59:34.967019  234625 client.go:171] LocalClient.Create took 10.306388857s
	I0412 19:59:34.967034  234625 start.go:173] duration metric: libmachine.API.Create for "kindnet-20220412195202-42006" took 10.306453895s
	I0412 19:59:34.967049  234625 start.go:306] post-start starting for "kindnet-20220412195202-42006" (driver="docker")
	I0412 19:59:34.967060  234625 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0412 19:59:34.967107  234625 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0412 19:59:34.967146  234625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220412195202-42006
	I0412 19:59:35.006426  234625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49382 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/kindnet-20220412195202-42006/id_rsa Username:docker}
	I0412 19:59:35.096908  234625 ssh_runner.go:195] Run: cat /etc/os-release
	I0412 19:59:35.100043  234625 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0412 19:59:35.100113  234625 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0412 19:59:35.100132  234625 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0412 19:59:35.100141  234625 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0412 19:59:35.100154  234625 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/addons for local assets ...
	I0412 19:59:35.100216  234625 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files for local assets ...
	I0412 19:59:35.100289  234625 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/420062.pem -> 420062.pem in /etc/ssl/certs
	I0412 19:59:35.100388  234625 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0412 19:59:35.108243  234625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/420062.pem --> /etc/ssl/certs/420062.pem (1708 bytes)
	I0412 19:59:35.128335  234625 start.go:309] post-start completed in 161.261633ms
	I0412 19:59:35.128743  234625 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-20220412195202-42006
	I0412 19:59:35.163301  234625 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220412195202-42006/config.json ...
	I0412 19:59:35.163570  234625 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0412 19:59:35.163614  234625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220412195202-42006
	I0412 19:59:35.199687  234625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49382 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/kindnet-20220412195202-42006/id_rsa Username:docker}
	I0412 19:59:35.289368  234625 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0412 19:59:35.293975  234625 start.go:134] duration metric: createHost completed in 10.636420263s
	I0412 19:59:35.294008  234625 start.go:81] releasing machines lock for "kindnet-20220412195202-42006", held for 10.636608341s
	I0412 19:59:35.294107  234625 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-20220412195202-42006
	I0412 19:59:35.329324  234625 ssh_runner.go:195] Run: systemctl --version
	I0412 19:59:35.329391  234625 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0412 19:59:35.329396  234625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220412195202-42006
	I0412 19:59:35.329451  234625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220412195202-42006
	I0412 19:59:35.366712  234625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49382 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/kindnet-20220412195202-42006/id_rsa Username:docker}
	I0412 19:59:35.370262  234625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49382 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/kindnet-20220412195202-42006/id_rsa Username:docker}
	I0412 19:59:35.452540  234625 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0412 19:59:35.475848  234625 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0412 19:59:35.486091  234625 docker.go:183] disabling docker service ...
	I0412 19:59:35.486153  234625 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0412 19:59:35.503897  234625 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0412 19:59:35.514103  234625 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0412 19:59:35.602325  234625 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0412 19:59:35.682686  234625 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0412 19:59:35.693997  234625 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0412 19:59:35.709312  234625 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLm1vbml0b3IudjEuY2dyb3VwcyJdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNiIKICAgIHN0YXRzX2NvbGxlY3RfcGVyaW9kID0gMTAKICAgIGVuYWJsZV90bHNfc3RyZWFtaW5nID0gZmFsc2UKICAgIG1heF9jb250YWluZXJfbG9nX2xpbmVfc2l6ZSA9IDE2Mzg0CiAgICByZXN0cmljdF9vb21fc2NvcmVfYWRqID0gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZF0KICAgICAgZGlzY2FyZF91bnBhY2tlZF9sYXllcnMgPSB0cnVlCiAgICAgIHNuYXBzaG90dGVyID0gIm92ZXJsYXlmcyIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQuZGVmYXVsdF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnVudHJ1c3RlZF93b3JrbG9hZF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICIiCiAgICAgICAgcnVudGltZV9lbmdpbmUgPSAiIgogICAgICAgIHJ1bnRpbWVfcm9vdCA9ICIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzLnJ1bmNdCiAgICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICBTeXN0ZW1kQ2dyb3VwID0gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY25pXQogICAgICBiaW5fZGlyID0gIi9vcHQvY25pL2JpbiIKICAgICAgY29uZl9kaXIgPSAiL2V0Yy9jbmkvbmV0Lm1rIgogICAgICBjb25mX3RlbXBsYXRlID0gIiIKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5yZWdpc3RyeV0KICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5Lm1pcnJvcnNdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5Lm1pcnJvcnMuImRvY2tlci5pbyJdCiAgICAgICAgICBlbmRwb2ludCA9IFsiaHR0cHM6Ly9yZWdpc3RyeS0xLmRvY2tlci5pbyJdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuc2VydmljZS52MS5kaWZmLXNlcnZpY2UiXQogICAgZGVmYXVsdCA9IFsid2Fsa2luZyJdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ2MudjEuc2NoZWR1bGVyIl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
	I0412 19:59:35.726756  234625 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0412 19:59:35.734723  234625 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0412 19:59:35.741966  234625 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0412 19:59:35.855077  234625 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0412 19:59:35.927565  234625 start.go:441] Will wait 60s for socket path /run/containerd/containerd.sock
	I0412 19:59:35.927640  234625 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0412 19:59:35.931767  234625 start.go:462] Will wait 60s for crictl version
	I0412 19:59:35.931829  234625 ssh_runner.go:195] Run: sudo crictl version
	I0412 19:59:35.959625  234625 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-04-12T19:59:35Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0412 19:59:47.007016  234625 ssh_runner.go:195] Run: sudo crictl version
	I0412 19:59:47.035718  234625 start.go:471] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.5.10
	RuntimeApiVersion:  v1alpha2
	I0412 19:59:47.035789  234625 ssh_runner.go:195] Run: containerd --version
	I0412 19:59:47.057937  234625 ssh_runner.go:195] Run: containerd --version
	I0412 19:59:47.083583  234625 out.go:176] * Preparing Kubernetes v1.23.5 on containerd 1.5.10 ...
	I0412 19:59:47.083694  234625 cli_runner.go:164] Run: docker network inspect kindnet-20220412195202-42006 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0412 19:59:47.119300  234625 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0412 19:59:47.122851  234625 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0412 19:59:47.134888  234625 out.go:176]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0412 19:59:47.134973  234625 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime containerd
	I0412 19:59:47.135033  234625 ssh_runner.go:195] Run: sudo crictl images --output json
	I0412 19:59:47.161492  234625 containerd.go:607] all images are preloaded for containerd runtime.
	I0412 19:59:47.161517  234625 containerd.go:521] Images already preloaded, skipping extraction
	I0412 19:59:47.161562  234625 ssh_runner.go:195] Run: sudo crictl images --output json
	I0412 19:59:47.186488  234625 containerd.go:607] all images are preloaded for containerd runtime.
	I0412 19:59:47.186513  234625 cache_images.go:84] Images are preloaded, skipping loading
	I0412 19:59:47.186577  234625 ssh_runner.go:195] Run: sudo crictl info
	I0412 19:59:47.212894  234625 cni.go:93] Creating CNI manager for "kindnet"
	I0412 19:59:47.212932  234625 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0412 19:59:47.212953  234625 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.23.5 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-20220412195202-42006 NodeName:kindnet-20220412195202-42006 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0412 19:59:47.213114  234625 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "kindnet-20220412195202-42006"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.5
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0412 19:59:47.213218  234625 kubeadm.go:936] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.5/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=kindnet-20220412195202-42006 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.5 ClusterName:kindnet-20220412195202-42006 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:}
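The kubelet unit fragment above becomes the systemd drop-in written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf one line below (573 bytes). The bare "ExecStart=" line ahead of the real one is the standard systemd idiom: it clears the ExecStart inherited from the base kubelet.service, which a non-oneshot service must do before redefining the command. To inspect the merged result on the node:

	systemctl cat kubelet          # base unit plus all drop-ins, in override order
	sudo systemctl daemon-reload   # required after editing drop-ins by hand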
	I0412 19:59:47.213284  234625 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.5
	I0412 19:59:47.221668  234625 binaries.go:44] Found k8s binaries, skipping transfer
	I0412 19:59:47.221744  234625 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0412 19:59:47.229345  234625 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (573 bytes)
	I0412 19:59:47.244031  234625 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0412 19:59:47.257717  234625 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2057 bytes)
	I0412 19:59:47.271915  234625 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0412 19:59:47.275046  234625 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
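The one-liner above is the usual idempotent /etc/hosts edit: grep -v strips any stale control-plane.minikube.internal entry, echo appends the fresh mapping, and the result is staged under /tmp and installed with sudo cp, because a plain "sudo echo ... >> /etc/hosts" would perform the redirection as the unprivileged shell. The same pattern with placeholder NAME/IP (placeholders, not values from this run):

	{ grep -v $'\tNAME$' /etc/hosts; echo "IP	NAME"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts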
	I0412 19:59:47.285681  234625 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220412195202-42006 for IP: 192.168.67.2
	I0412 19:59:47.285815  234625 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.key
	I0412 19:59:47.285882  234625 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.key
	I0412 19:59:47.285948  234625 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220412195202-42006/client.key
	I0412 19:59:47.285980  234625 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220412195202-42006/client.crt with IP's: []
	I0412 19:59:47.707380  234625 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220412195202-42006/client.crt ...
	I0412 19:59:47.707423  234625 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220412195202-42006/client.crt: {Name:mk5059b3c4fae947bb1fc99c8693ca8f2b5e9668 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 19:59:47.707679  234625 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220412195202-42006/client.key ...
	I0412 19:59:47.707699  234625 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220412195202-42006/client.key: {Name:mk6c27fac79f3772ad8e270e49ba33e4795e15de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 19:59:47.707842  234625 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220412195202-42006/apiserver.key.c7fa3a9e
	I0412 19:59:47.707864  234625 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220412195202-42006/apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0412 19:59:47.835182  234625 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220412195202-42006/apiserver.crt.c7fa3a9e ...
	I0412 19:59:47.835214  234625 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220412195202-42006/apiserver.crt.c7fa3a9e: {Name:mk9e6b042dbd3040132f0c6e4fc317c376013de3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 19:59:47.835433  234625 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220412195202-42006/apiserver.key.c7fa3a9e ...
	I0412 19:59:47.835450  234625 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220412195202-42006/apiserver.key.c7fa3a9e: {Name:mk0670b8a49acf77375ca4180f2f6a38616b9c60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 19:59:47.835571  234625 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220412195202-42006/apiserver.crt.c7fa3a9e -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220412195202-42006/apiserver.crt
	I0412 19:59:47.835658  234625 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220412195202-42006/apiserver.key.c7fa3a9e -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220412195202-42006/apiserver.key
	I0412 19:59:47.835719  234625 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220412195202-42006/proxy-client.key
	I0412 19:59:47.835740  234625 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220412195202-42006/proxy-client.crt with IP's: []
	I0412 19:59:48.032648  234625 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220412195202-42006/proxy-client.crt ...
	I0412 19:59:48.032682  234625 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220412195202-42006/proxy-client.crt: {Name:mk528ca3c8cae5bc77058b8b0d4389c64b0ac73c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 19:59:48.032906  234625 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220412195202-42006/proxy-client.key ...
	I0412 19:59:48.032923  234625 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220412195202-42006/proxy-client.key: {Name:mkcae74fa4c12fae2d02c0880924d829f627972c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 19:59:48.033184  234625 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/42006.pem (1338 bytes)
	W0412 19:59:48.033241  234625 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/42006_empty.pem, impossibly tiny 0 bytes
	I0412 19:59:48.033258  234625 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem (1679 bytes)
	I0412 19:59:48.033316  234625 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem (1082 bytes)
	I0412 19:59:48.033350  234625 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem (1123 bytes)
	I0412 19:59:48.033383  234625 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem (1675 bytes)
	I0412 19:59:48.033438  234625 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/420062.pem (1708 bytes)
	I0412 19:59:48.034144  234625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220412195202-42006/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0412 19:59:48.055187  234625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220412195202-42006/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0412 19:59:48.075056  234625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220412195202-42006/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0412 19:59:48.095916  234625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/kindnet-20220412195202-42006/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0412 19:59:48.116341  234625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0412 19:59:48.135114  234625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0412 19:59:48.154103  234625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0412 19:59:48.173233  234625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0412 19:59:48.192800  234625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0412 19:59:48.212546  234625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/42006.pem --> /usr/share/ca-certificates/42006.pem (1338 bytes)
	I0412 19:59:48.233026  234625 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/420062.pem --> /usr/share/ca-certificates/420062.pem (1708 bytes)
	I0412 19:59:48.251632  234625 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0412 19:59:48.266099  234625 ssh_runner.go:195] Run: openssl version
	I0412 19:59:48.271402  234625 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/420062.pem && ln -fs /usr/share/ca-certificates/420062.pem /etc/ssl/certs/420062.pem"
	I0412 19:59:48.279695  234625 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/420062.pem
	I0412 19:59:48.283066  234625 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Apr 12 19:26 /usr/share/ca-certificates/420062.pem
	I0412 19:59:48.283119  234625 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/420062.pem
	I0412 19:59:48.288470  234625 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/420062.pem /etc/ssl/certs/3ec20f2e.0"
	I0412 19:59:48.296579  234625 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0412 19:59:48.305946  234625 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0412 19:59:48.309733  234625 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Apr 12 19:21 /usr/share/ca-certificates/minikubeCA.pem
	I0412 19:59:48.309797  234625 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0412 19:59:48.315491  234625 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0412 19:59:48.323461  234625 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/42006.pem && ln -fs /usr/share/ca-certificates/42006.pem /etc/ssl/certs/42006.pem"
	I0412 19:59:48.331682  234625 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42006.pem
	I0412 19:59:48.335099  234625 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Apr 12 19:26 /usr/share/ca-certificates/42006.pem
	I0412 19:59:48.335158  234625 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42006.pem
	I0412 19:59:48.340576  234625 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/42006.pem /etc/ssl/certs/51391683.0"
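The openssl x509 -hash -noout calls above compute the subject-name hash that OpenSSL uses to locate CAs in /etc/ssl/certs, which is why each certificate is then symlinked as <hash>.0: minikubeCA.pem hashes to b5213941 (hence b5213941.0), 42006.pem to 51391683, and 420062.pem to 3ec20f2e. To confirm a CA is discoverable after linking:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	ls -l /etc/ssl/certs/b5213941.0                                           # -> /etc/ssl/certs/minikubeCA.pem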
	I0412 19:59:48.348569  234625 kubeadm.go:391] StartCluster: {Name:kindnet-20220412195202-42006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:kindnet-20220412195202-42006 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0412 19:59:48.348663  234625 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0412 19:59:48.348705  234625 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0412 19:59:48.373690  234625 cri.go:87] found id: ""
	I0412 19:59:48.373763  234625 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0412 19:59:48.381689  234625 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0412 19:59:48.390331  234625 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0412 19:59:48.390395  234625 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0412 19:59:48.398073  234625 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0412 19:59:48.398143  234625 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
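As kubeadm.go:221 notes above, SystemVerification is ignored because under the docker driver the "node" is itself a container, so kernel and cgroup probes would report the host; the DirAvailable/FileAvailable ignores make init re-runnable over leftover manifests, and Swap/Mem are skipped for the same constrained-CI reasons flagged in the cgroup warning earlier. The preflight phase can be exercised on its own; a sketch, assuming the same rendered config file:

	sudo kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml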
	I0412 19:59:48.676509  234625 out.go:203]   - Generating certificates and keys ...
	I0412 19:59:51.433028  234625 out.go:203]   - Booting up control plane ...
	I0412 20:00:03.478717  234625 out.go:203]   - Configuring RBAC rules ...
	I0412 20:00:03.893499  234625 cni.go:93] Creating CNI manager for "kindnet"
	I0412 20:00:03.895818  234625 out.go:176] * Configuring CNI (Container Networking Interface) ...
	I0412 20:00:03.895907  234625 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0412 20:00:03.899812  234625 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.5/kubectl ...
	I0412 20:00:03.899838  234625 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0412 20:00:03.913929  234625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0412 20:00:04.692692  234625 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0412 20:00:04.692766  234625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:00:04.692774  234625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl label nodes minikube.k8s.io/version=v1.25.2 minikube.k8s.io/commit=dcd548d63d1c0dcbdc0ffc0bd37d4379117c142f minikube.k8s.io/name=kindnet-20220412195202-42006 minikube.k8s.io/updated_at=2022_04_12T20_00_04_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:00:04.786887  234625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:00:04.786942  234625 ops.go:34] apiserver oom_adj: -16
	I0412 20:00:05.348474  234625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:00:05.848261  234625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:00:06.347958  234625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:00:06.848142  234625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:00:07.348534  234625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:00:07.848181  234625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:00:08.348252  234625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:00:08.848242  234625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:00:09.348718  234625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:00:09.848435  234625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:00:10.348189  234625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:00:10.848205  234625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:00:11.348276  234625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:00:11.847965  234625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:00:12.348072  234625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:00:12.848241  234625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:00:13.348206  234625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:00:13.847960  234625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:00:14.348831  234625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:00:14.848686  234625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:00:15.348733  234625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:00:15.847949  234625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:00:16.348332  234625 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:00:16.422308  234625 kubeadm.go:1020] duration metric: took 11.729581193s to wait for elevateKubeSystemPrivileges.
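The burst of "kubectl get sa default" runs above, one roughly every 500ms from 20:00:04 to 20:00:16, is minikube waiting for the controller-manager's ServiceAccount controller to populate the default ServiceAccount before proceeding (the step is labelled elevateKubeSystemPrivileges in the duration metric above, and the minikube-rbac clusterrolebinding created at 20:00:04 depends on it). An equivalent hand-rolled wait, as a sketch:

	until kubectl -n default get sa default >/dev/null 2>&1; do sleep 0.5; done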
	I0412 20:00:16.422402  234625 kubeadm.go:393] StartCluster complete in 28.073846211s
	I0412 20:00:16.422430  234625 settings.go:142] acquiring lock: {Name:mkaf0259d09993f7f0249c32b54fea561e21f88c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 20:00:16.422559  234625 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0412 20:00:16.424828  234625 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig: {Name:mk47182dadcc139652898f38b199a7292a3b4031 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 20:00:16.945845  234625 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "kindnet-20220412195202-42006" rescaled to 1
	I0412 20:00:16.945920  234625 start.go:208] Will wait 5m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0412 20:00:16.947880  234625 out.go:176] * Verifying Kubernetes components...
	I0412 20:00:16.947946  234625 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0412 20:00:16.945962  234625 addons.go:415] enableAddons start: toEnable=map[], additional=[]
	I0412 20:00:16.946039  234625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0412 20:00:16.946209  234625 config.go:178] Loaded profile config "kindnet-20220412195202-42006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.5
	I0412 20:00:16.948060  234625 addons.go:65] Setting storage-provisioner=true in profile "kindnet-20220412195202-42006"
	I0412 20:00:16.948137  234625 addons.go:153] Setting addon storage-provisioner=true in "kindnet-20220412195202-42006"
	I0412 20:00:16.948148  234625 addons.go:65] Setting default-storageclass=true in profile "kindnet-20220412195202-42006"
	I0412 20:00:16.948171  234625 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kindnet-20220412195202-42006"
	W0412 20:00:16.948152  234625 addons.go:165] addon storage-provisioner should already be in state true
	I0412 20:00:16.948301  234625 host.go:66] Checking if "kindnet-20220412195202-42006" exists ...
	I0412 20:00:16.948605  234625 cli_runner.go:164] Run: docker container inspect kindnet-20220412195202-42006 --format={{.State.Status}}
	I0412 20:00:16.948824  234625 cli_runner.go:164] Run: docker container inspect kindnet-20220412195202-42006 --format={{.State.Status}}
	I0412 20:00:16.994055  234625 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0412 20:00:16.994187  234625 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0412 20:00:16.994201  234625 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0412 20:00:16.994256  234625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220412195202-42006
	I0412 20:00:16.996443  234625 addons.go:153] Setting addon default-storageclass=true in "kindnet-20220412195202-42006"
	W0412 20:00:16.996486  234625 addons.go:165] addon default-storageclass should already be in state true
	I0412 20:00:16.996527  234625 host.go:66] Checking if "kindnet-20220412195202-42006" exists ...
	I0412 20:00:16.997174  234625 cli_runner.go:164] Run: docker container inspect kindnet-20220412195202-42006 --format={{.State.Status}}
	I0412 20:00:17.030079  234625 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.67.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
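The sed pipeline above edits the coredns ConfigMap in flight: it inserts a hosts plugin block immediately before the existing "forward . /etc/resolv.conf" directive so that host.minikube.internal resolves to the gateway (192.168.67.1), with fallthrough passing every other query on to the forwarder. The injected Corefile fragment is:

	hosts {
	   192.168.67.1 host.minikube.internal
	   fallthrough
	}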
	I0412 20:00:17.031701  234625 node_ready.go:35] waiting up to 5m0s for node "kindnet-20220412195202-42006" to be "Ready" ...
	I0412 20:00:17.035075  234625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49382 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/kindnet-20220412195202-42006/id_rsa Username:docker}
	I0412 20:00:17.041458  234625 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0412 20:00:17.041486  234625 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0412 20:00:17.041543  234625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-20220412195202-42006
	I0412 20:00:17.080438  234625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49382 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/kindnet-20220412195202-42006/id_rsa Username:docker}
	I0412 20:00:17.193530  234625 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0412 20:00:17.195131  234625 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0412 20:00:17.294685  234625 start.go:777] {"host.minikube.internal": 192.168.67.1} host record injected into CoreDNS
	I0412 20:00:17.612049  234625 out.go:176] * Enabled addons: storage-provisioner, default-storageclass
	I0412 20:00:17.612127  234625 addons.go:417] enableAddons completed in 666.177991ms
	I0412 20:00:19.038275  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:00:21.038649  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:00:23.538578  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:00:26.038437  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:00:28.538627  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:00:31.038447  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:00:33.538307  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:00:35.538917  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:00:38.038927  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:00:40.538521  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:00:42.540527  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:00:45.038334  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:00:47.038391  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:00:49.538324  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:00:51.538974  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:00:54.038323  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:00:56.038611  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:00:58.539241  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:01:01.038645  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:01:03.038739  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:01:05.039226  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:01:07.538297  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:01:09.538495  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:01:11.538805  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:01:14.038511  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:01:16.038744  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:01:18.539026  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:01:21.039163  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:01:23.538809  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:01:25.538960  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:01:27.539080  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:01:30.038178  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:01:32.038980  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:01:34.538822  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:01:37.038790  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:01:39.538195  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:01:41.538778  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:01:44.038722  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:01:46.539146  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:01:49.038295  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:01:51.038758  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:01:53.039071  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:01:55.539116  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:01:58.038934  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:02:00.039044  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:02:02.539085  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:02:05.039182  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:02:07.538476  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:02:09.538679  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:02:12.038785  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:02:14.038818  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:02:16.538735  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:02:19.038825  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:02:21.039094  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:02:23.538266  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:02:25.539402  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:02:28.039025  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:02:30.538544  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:02:33.038931  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:02:35.539236  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:02:38.038709  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:02:40.538479  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:02:42.538730  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:02:45.037926  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:02:47.038026  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:02:49.038335  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:02:51.038788  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:02:53.538566  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:02:56.038393  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:02:58.538627  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:03:00.539187  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:03:03.038185  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:03:05.038613  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:03:07.038851  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:03:09.039044  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:03:11.538897  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:03:13.539091  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:03:16.039194  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:03:18.538658  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:03:20.539391  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:03:23.038852  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:03:25.039228  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:03:27.538951  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:03:30.038469  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:03:32.538601  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:03:34.538943  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:03:37.039296  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:03:39.539011  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:03:42.038305  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:03:44.039032  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:03:46.539125  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:03:49.038853  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:03:51.538787  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:03:54.038644  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:03:56.039165  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:03:58.538580  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:04:00.538724  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:04:03.038374  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:04:05.039038  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:04:07.539038  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:04:10.038544  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:04:12.538215  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:04:14.538948  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:04:17.038294  234625 node_ready.go:58] node "kindnet-20220412195202-42006" has status "Ready":"False"
	I0412 20:04:17.040872  234625 node_ready.go:38] duration metric: took 4m0.009134579s waiting for node "kindnet-20220412195202-42006" to be "Ready" ...
	I0412 20:04:17.043770  234625 out.go:176] 
	W0412 20:04:17.043936  234625 out.go:241] X Exiting due to GUEST_START: wait 5m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0412 20:04:17.043956  234625 out.go:241] * 
	W0412 20:04:17.044709  234625 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0412 20:04:17.047478  234625 out.go:176] 

                                                
                                                
** /stderr **
net_test.go:100: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (292.79s)
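The shape of this failure is visible in the log: the control plane, RBAC, and addons all came up, but the node reported Ready=False for the entire 4m wait (20:00:17 through 20:04:17). With the kindnet CNI that typically means the kindnet DaemonSet pod never became ready, leaving kubelet's NetworkReady condition false. Plausible follow-up diagnostics for a repro (names taken from this run):

	kubectl -n kube-system get pods -o wide               # is the kindnet-* pod Running?
	kubectl describe node kindnet-20220412195202-42006    # Conditions and "network plugin not ready" events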

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (295.57s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:170: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-20220412200421-42006 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:170: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-20220412200421-42006 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0: exit status 80 (4m53.509933816s)

                                                
                                                
-- stdout --
	* [old-k8s-version-20220412200421-42006] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=13812
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on user configuration
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	* Using Docker driver with the root privilege
	* Starting control plane node old-k8s-version-20220412200421-42006 in cluster old-k8s-version-20220412200421-42006
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.16.0 on containerd 1.5.10 ...
	  - kubelet.cni-conf-dir=/etc/cni/net.mk
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0412 20:04:21.943756  248748 out.go:297] Setting OutFile to fd 1 ...
	I0412 20:04:21.943908  248748 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0412 20:04:21.943921  248748 out.go:310] Setting ErrFile to fd 2...
	I0412 20:04:21.943929  248748 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0412 20:04:21.944048  248748 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/bin
	I0412 20:04:21.944392  248748 out.go:304] Setting JSON to false
	I0412 20:04:21.945926  248748 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":10015,"bootTime":1649783847,"procs":640,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1023-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0412 20:04:21.946006  248748 start.go:125] virtualization: kvm guest
	I0412 20:04:21.948836  248748 out.go:176] * [old-k8s-version-20220412200421-42006] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	I0412 20:04:21.950521  248748 out.go:176]   - MINIKUBE_LOCATION=13812
	I0412 20:04:21.949029  248748 notify.go:193] Checking for updates...
	I0412 20:04:21.952373  248748 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0412 20:04:21.953931  248748 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0412 20:04:21.955384  248748 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube
	I0412 20:04:21.957010  248748 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0412 20:04:21.957570  248748 config.go:178] Loaded profile config "bridge-20220412195202-42006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.5
	I0412 20:04:21.957665  248748 config.go:178] Loaded profile config "calico-20220412195203-42006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.5
	I0412 20:04:21.957760  248748 config.go:178] Loaded profile config "enable-default-cni-20220412195202-42006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.5
	I0412 20:04:21.957824  248748 driver.go:346] Setting default libvirt URI to qemu:///system
	I0412 20:04:22.005019  248748 docker.go:137] docker version: linux-20.10.14
	I0412 20:04:22.005129  248748 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0412 20:04:22.107715  248748 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:75 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:49 SystemTime:2022-04-12 20:04:22.037581069 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1023-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662799872 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0412 20:04:22.107826  248748 docker.go:254] overlay module found
	I0412 20:04:22.110359  248748 out.go:176] * Using the docker driver based on user configuration
	I0412 20:04:22.110392  248748 start.go:284] selected driver: docker
	I0412 20:04:22.110402  248748 start.go:801] validating driver "docker" against <nil>
	I0412 20:04:22.110427  248748 start.go:812] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	W0412 20:04:22.110480  248748 oci.go:120] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0412 20:04:22.110500  248748 out.go:241] ! Your cgroup does not allow setting memory.
	I0412 20:04:22.111979  248748 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0412 20:04:22.112605  248748 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0412 20:04:22.210784  248748 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:75 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:49 SystemTime:2022-04-12 20:04:22.144229146 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1023-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662799872 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0412 20:04:22.210928  248748 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0412 20:04:22.211096  248748 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0412 20:04:22.213395  248748 out.go:176] * Using Docker driver with the root privilege
	I0412 20:04:22.213427  248748 cni.go:93] Creating CNI manager for ""
	I0412 20:04:22.213435  248748 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0412 20:04:22.213445  248748 cni.go:217] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0412 20:04:22.213450  248748 cni.go:222] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0412 20:04:22.213456  248748 start_flags.go:301] Found "CNI" CNI - setting NetworkPlugin=cni
	I0412 20:04:22.213485  248748 start_flags.go:306] config:
	{Name:old-k8s-version-20220412200421-42006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220412200421-42006 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0412 20:04:22.215382  248748 out.go:176] * Starting control plane node old-k8s-version-20220412200421-42006 in cluster old-k8s-version-20220412200421-42006
	I0412 20:04:22.215417  248748 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0412 20:04:22.217004  248748 out.go:176] * Pulling base image ...
	I0412 20:04:22.217041  248748 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0412 20:04:22.217077  248748 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4
	I0412 20:04:22.217091  248748 cache.go:57] Caching tarball of preloaded images
	I0412 20:04:22.217131  248748 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon
	I0412 20:04:22.217381  248748 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0412 20:04:22.217402  248748 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on containerd
	I0412 20:04:22.217547  248748 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220412200421-42006/config.json ...
	I0412 20:04:22.217582  248748 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220412200421-42006/config.json: {Name:mka0a0f32e5c142080c64f8448dcd65c89408bd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
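
The two lines above persist the freshly generated profile to config.json under a write lock (lock.go's WriteFile, 500ms delay, 1m0s timeout). A minimal Go sketch of the same idea, write-then-rename for atomicity; the helper name and the simplified locking-free approach are assumptions, not minikube's actual implementation:

package main

import (
	"encoding/json"
	"fmt"
	"os"
	"path/filepath"
)

// saveProfile marshals cfg to JSON and renames a temp file into place, so a
// concurrent reader never observes a half-written config.json.
func saveProfile(path string, cfg any) error {
	data, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		return err
	}
	tmp := filepath.Join(filepath.Dir(path), ".config.json.tmp")
	if err := os.WriteFile(tmp, data, 0o644); err != nil {
		return err
	}
	return os.Rename(tmp, path) // rename is atomic on POSIX filesystems
}

func main() {
	// Hypothetical profile fields for illustration only.
	cfg := map[string]any{"Name": "old-k8s-version", "Memory": 2200}
	if err := saveProfile("config.json", cfg); err != nil {
		fmt.Println(err)
	}
}
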
	I0412 20:04:22.262750  248748 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon, skipping pull
	I0412 20:04:22.262782  248748 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 exists in daemon, skipping load
	I0412 20:04:22.262795  248748 cache.go:206] Successfully downloaded all kic artifacts
	I0412 20:04:22.262844  248748 start.go:352] acquiring machines lock for old-k8s-version-20220412200421-42006: {Name:mk51335e8aecb7357290fc27d80d48b525f2bff6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0412 20:04:22.263018  248748 start.go:356] acquired machines lock for "old-k8s-version-20220412200421-42006" in 148.704µs
	I0412 20:04:22.263054  248748 start.go:91] Provisioning new machine with config: &{Name:old-k8s-version-20220412200421-42006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220412200421-42006 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0412 20:04:22.263134  248748 start.go:131] createHost starting for "" (driver="docker")
	I0412 20:04:22.265546  248748 out.go:203] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0412 20:04:22.265840  248748 start.go:165] libmachine.API.Create for "old-k8s-version-20220412200421-42006" (driver="docker")
	I0412 20:04:22.265889  248748 client.go:168] LocalClient.Create starting
	I0412 20:04:22.265970  248748 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem
	I0412 20:04:22.266004  248748 main.go:134] libmachine: Decoding PEM data...
	I0412 20:04:22.266022  248748 main.go:134] libmachine: Parsing certificate...
	I0412 20:04:22.266084  248748 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem
	I0412 20:04:22.266105  248748 main.go:134] libmachine: Decoding PEM data...
	I0412 20:04:22.266119  248748 main.go:134] libmachine: Parsing certificate...
	I0412 20:04:22.266513  248748 cli_runner.go:164] Run: docker network inspect old-k8s-version-20220412200421-42006 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0412 20:04:22.301844  248748 cli_runner.go:211] docker network inspect old-k8s-version-20220412200421-42006 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0412 20:04:22.301926  248748 network_create.go:272] running [docker network inspect old-k8s-version-20220412200421-42006] to gather additional debugging logs...
	I0412 20:04:22.301947  248748 cli_runner.go:164] Run: docker network inspect old-k8s-version-20220412200421-42006
	W0412 20:04:22.335703  248748 cli_runner.go:211] docker network inspect old-k8s-version-20220412200421-42006 returned with exit code 1
	I0412 20:04:22.335746  248748 network_create.go:275] error running [docker network inspect old-k8s-version-20220412200421-42006]: docker network inspect old-k8s-version-20220412200421-42006: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: old-k8s-version-20220412200421-42006
	I0412 20:04:22.335764  248748 network_create.go:277] output of [docker network inspect old-k8s-version-20220412200421-42006]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: old-k8s-version-20220412200421-42006
	
	** /stderr **
	I0412 20:04:22.335819  248748 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0412 20:04:22.369870  248748 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-3941532cd703 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:87:d3:29:2b}}
	I0412 20:04:22.370362  248748 network.go:240] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName:br-6a56a3e6bf06 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:9a:ff:38:75}}
	I0412 20:04:22.370957  248748 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.67.0:0xc000592430] misses:0}
	I0412 20:04:22.370996  248748 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0412 20:04:22.371015  248748 network_create.go:115] attempt to create docker network old-k8s-version-20220412200421-42006 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0412 20:04:22.371061  248748 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true old-k8s-version-20220412200421-42006
	I0412 20:04:22.444730  248748 network_create.go:99] docker network old-k8s-version-20220412200421-42006 192.168.67.0/24 created
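
The network.go lines above show the free-subnet search: 192.168.49.0/24 and 192.168.58.0/24 are already taken by other minikube bridges, so the third candidate, 192.168.67.0/24, is reserved and used. A rough Go sketch of that scan, assuming the same step-by-9 candidate sequence visible in the log and checking only local interface addresses (the real code also inspects docker networks and holds a 1m0s reservation):

package main

import (
	"fmt"
	"net"
)

// firstFreeSubnet walks candidate /24s (192.168.49.0, .58.0, .67.0, ...)
// and returns the first one that no local interface already sits in.
func firstFreeSubnet() (*net.IPNet, error) {
	addrs, err := net.InterfaceAddrs()
	if err != nil {
		return nil, err
	}
	for octet := 49; octet < 255; octet += 9 {
		_, candidate, _ := net.ParseCIDR(fmt.Sprintf("192.168.%d.0/24", octet))
		taken := false
		for _, a := range addrs {
			if ipnet, ok := a.(*net.IPNet); ok && candidate.Contains(ipnet.IP) {
				taken = true // an existing bridge or interface occupies this /24
				break
			}
		}
		if !taken {
			return candidate, nil
		}
	}
	return nil, fmt.Errorf("no free 192.168.x.0/24 subnet found")
}

func main() {
	subnet, err := firstFreeSubnet()
	if err != nil {
		panic(err)
	}
	fmt.Println("free subnet:", subnet)
}
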
	I0412 20:04:22.444765  248748 kic.go:106] calculated static IP "192.168.67.2" for the "old-k8s-version-20220412200421-42006" container
	I0412 20:04:22.444825  248748 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0412 20:04:22.481891  248748 cli_runner.go:164] Run: docker volume create old-k8s-version-20220412200421-42006 --label name.minikube.sigs.k8s.io=old-k8s-version-20220412200421-42006 --label created_by.minikube.sigs.k8s.io=true
	I0412 20:04:22.519786  248748 oci.go:103] Successfully created a docker volume old-k8s-version-20220412200421-42006
	I0412 20:04:22.519874  248748 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-20220412200421-42006-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-20220412200421-42006 --entrypoint /usr/bin/test -v old-k8s-version-20220412200421-42006:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -d /var/lib
	I0412 20:04:23.120121  248748 oci.go:107] Successfully prepared a docker volume old-k8s-version-20220412200421-42006
	I0412 20:04:23.120201  248748 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0412 20:04:23.120236  248748 kic.go:179] Starting extracting preloaded images to volume ...
	I0412 20:04:23.120324  248748 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-20220412200421-42006:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -I lz4 -xf /preloaded.tar -C /extractDir
	I0412 20:04:30.135816  248748 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-20220412200421-42006:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -I lz4 -xf /preloaded.tar -C /extractDir: (7.015415991s)
	I0412 20:04:30.135855  248748 kic.go:188] duration metric: took 7.015615 seconds to extract preloaded images to volume
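
The extraction step timed above populates the cluster's /var volume from the lz4 preload tarball by bind-mounting both into a throwaway kicbase container and running tar inside it. A minimal Go sketch of that invocation via os/exec; the paths and image tag passed in main are placeholders, not the values from this run:

package main

import (
	"fmt"
	"os/exec"
)

// extractPreload untars an lz4-compressed image preload into a named docker
// volume, mirroring the `docker run --rm --entrypoint /usr/bin/tar ...`
// command in the log above.
func extractPreload(tarball, volume, image string) error {
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		image,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("extract preload: %v: %s", err, out)
	}
	return nil
}

func main() {
	// Hypothetical arguments; the real run used the cached tarball shown above.
	err := extractPreload("/path/to/preloaded-images.tar.lz4",
		"my-minikube-volume", "gcr.io/k8s-minikube/kicbase:v0.0.30")
	if err != nil {
		fmt.Println(err)
	}
}
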
	W0412 20:04:30.135900  248748 oci.go:136] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0412 20:04:30.135915  248748 oci.go:120] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0412 20:04:30.135964  248748 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0412 20:04:30.236365  248748 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-20220412200421-42006 --name old-k8s-version-20220412200421-42006 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-20220412200421-42006 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-20220412200421-42006 --network old-k8s-version-20220412200421-42006 --ip 192.168.67.2 --volume old-k8s-version-20220412200421-42006:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5
	I0412 20:04:30.664425  248748 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220412200421-42006 --format={{.State.Running}}
	I0412 20:04:30.702821  248748 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220412200421-42006 --format={{.State.Status}}
	I0412 20:04:30.738943  248748 cli_runner.go:164] Run: docker exec old-k8s-version-20220412200421-42006 stat /var/lib/dpkg/alternatives/iptables
	I0412 20:04:30.809811  248748 oci.go:279] the created container "old-k8s-version-20220412200421-42006" has a running status.
	I0412 20:04:30.809845  248748 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/old-k8s-version-20220412200421-42006/id_rsa...
	I0412 20:04:30.859841  248748 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/old-k8s-version-20220412200421-42006/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0412 20:04:30.963327  248748 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220412200421-42006 --format={{.State.Status}}
	I0412 20:04:31.006781  248748 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0412 20:04:31.006844  248748 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-20220412200421-42006 chown docker:docker /home/docker/.ssh/authorized_keys]
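
kic.go's SSH setup (the five lines above) generates an RSA keypair on the host, copies the public half into /home/docker/.ssh/authorized_keys inside the container, and fixes its ownership. A sketch of the key-generation half in Go, using golang.org/x/crypto/ssh to render the authorized_keys line; the output path is illustrative:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// generateSSHKey writes an RSA private key in PEM form and returns the
// matching authorized_keys line for installation on the guest.
func generateSSHKey(path string) (string, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return "", err
	}
	pemBytes := pem.EncodeToMemory(&pem.Block{
		Type:  "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(key),
	})
	if err := os.WriteFile(path, pemBytes, 0o600); err != nil {
		return "", err
	}
	pub, err := ssh.NewPublicKey(&key.PublicKey)
	if err != nil {
		return "", err
	}
	return string(ssh.MarshalAuthorizedKey(pub)), nil
}

func main() {
	line, err := generateSSHKey("id_rsa") // hypothetical path
	if err != nil {
		panic(err)
	}
	fmt.Print(line)
}
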
	I0412 20:04:31.108329  248748 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220412200421-42006 --format={{.State.Status}}
	I0412 20:04:31.145249  248748 machine.go:88] provisioning docker machine ...
	I0412 20:04:31.145302  248748 ubuntu.go:169] provisioning hostname "old-k8s-version-20220412200421-42006"
	I0412 20:04:31.145371  248748 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220412200421-42006
	I0412 20:04:31.182370  248748 main.go:134] libmachine: Using SSH client type: native
	I0412 20:04:31.182599  248748 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7d71c0] 0x7da220 <nil>  [] 0s} 127.0.0.1 49392 <nil> <nil>}
	I0412 20:04:31.182623  248748 main.go:134] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-20220412200421-42006 && echo "old-k8s-version-20220412200421-42006" | sudo tee /etc/hostname
	I0412 20:04:31.322826  248748 main.go:134] libmachine: SSH cmd err, output: <nil>: old-k8s-version-20220412200421-42006
	
	I0412 20:04:31.322918  248748 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220412200421-42006
	I0412 20:04:31.356757  248748 main.go:134] libmachine: Using SSH client type: native
	I0412 20:04:31.356944  248748 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7d71c0] 0x7da220 <nil>  [] 0s} 127.0.0.1 49392 <nil> <nil>}
	I0412 20:04:31.356982  248748 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-20220412200421-42006' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-20220412200421-42006/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-20220412200421-42006' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0412 20:04:31.476425  248748 main.go:134] libmachine: SSH cmd err, output: <nil>: 
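
Each "About to run SSH command" step dials the container's forwarded SSH port on 127.0.0.1 (49392 in this run) and executes a script like the hosts-file edit above. A minimal Go sketch of one such round trip with golang.org/x/crypto/ssh; note the host-key check is deliberately skipped here, which is only acceptable for a local throwaway container:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// runOverSSH authenticates with a private key, opens one session, and runs
// a single command, returning its combined output.
func runOverSSH(addr, user, keyPath, command string) (string, error) {
	keyBytes, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // local container only; never do this for real hosts
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer session.Close()
	out, err := session.CombinedOutput(command)
	return string(out), err
}

func main() {
	out, err := runOverSSH("127.0.0.1:49392", "docker", "id_rsa", "hostname")
	fmt.Println(out, err)
}
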
	I0412 20:04:31.476462  248748 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube}
	I0412 20:04:31.476488  248748 ubuntu.go:177] setting up certificates
	I0412 20:04:31.476499  248748 provision.go:83] configureAuth start
	I0412 20:04:31.476554  248748 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220412200421-42006
	I0412 20:04:31.512155  248748 provision.go:138] copyHostCerts
	I0412 20:04:31.512229  248748 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem, removing ...
	I0412 20:04:31.512248  248748 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem
	I0412 20:04:31.512332  248748 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem (1082 bytes)
	I0412 20:04:31.512455  248748 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem, removing ...
	I0412 20:04:31.512481  248748 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem
	I0412 20:04:31.512518  248748 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem (1123 bytes)
	I0412 20:04:31.512605  248748 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem, removing ...
	I0412 20:04:31.512620  248748 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem
	I0412 20:04:31.512653  248748 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem (1675 bytes)
	I0412 20:04:31.512724  248748 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-20220412200421-42006 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-20220412200421-42006]
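
provision.go then mints a server certificate signed by the minikube CA, with the SANs listed above (the node IP, loopback, and the minikube hostnames). A condensed Go sketch of CA-signed certificate generation with IP and DNS SANs using crypto/x509; error handling is elided and the CA is created inline for brevity, whereas the real run reuses the cached ca.pem/ca-key.pem:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Self-signed CA (stand-in for the cached minikubeCA pair).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate carrying the SANs from the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		IPAddresses:  []net.IP{net.ParseIP("192.168.67.2"), net.ParseIP("127.0.0.1")},
		DNSNames:     []string{"localhost", "minikube"},
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
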
	I0412 20:04:31.647568  248748 provision.go:172] copyRemoteCerts
	I0412 20:04:31.647640  248748 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0412 20:04:31.647675  248748 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220412200421-42006
	I0412 20:04:31.684483  248748 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49392 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/old-k8s-version-20220412200421-42006/id_rsa Username:docker}
	I0412 20:04:31.776150  248748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0412 20:04:31.796660  248748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem --> /etc/docker/server.pem (1277 bytes)
	I0412 20:04:31.816051  248748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0412 20:04:31.835117  248748 provision.go:86] duration metric: configureAuth took 358.602949ms
	I0412 20:04:31.835150  248748 ubuntu.go:193] setting minikube options for container-runtime
	I0412 20:04:31.835332  248748 config.go:178] Loaded profile config "old-k8s-version-20220412200421-42006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I0412 20:04:31.835346  248748 machine.go:91] provisioned docker machine in 690.067414ms
	I0412 20:04:31.835354  248748 client.go:171] LocalClient.Create took 9.569458114s
	I0412 20:04:31.835382  248748 start.go:173] duration metric: libmachine.API.Create for "old-k8s-version-20220412200421-42006" took 9.569546173s
	I0412 20:04:31.835407  248748 start.go:306] post-start starting for "old-k8s-version-20220412200421-42006" (driver="docker")
	I0412 20:04:31.835418  248748 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0412 20:04:31.835474  248748 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0412 20:04:31.835529  248748 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220412200421-42006
	I0412 20:04:31.870260  248748 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49392 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/old-k8s-version-20220412200421-42006/id_rsa Username:docker}
	I0412 20:04:31.960385  248748 ssh_runner.go:195] Run: cat /etc/os-release
	I0412 20:04:31.963414  248748 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0412 20:04:31.963441  248748 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0412 20:04:31.963455  248748 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0412 20:04:31.963462  248748 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0412 20:04:31.963472  248748 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/addons for local assets ...
	I0412 20:04:31.963523  248748 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files for local assets ...
	I0412 20:04:31.963586  248748 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/420062.pem -> 420062.pem in /etc/ssl/certs
	I0412 20:04:31.963660  248748 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0412 20:04:31.970966  248748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/420062.pem --> /etc/ssl/certs/420062.pem (1708 bytes)
	I0412 20:04:31.991208  248748 start.go:309] post-start completed in 155.779419ms
	I0412 20:04:31.991666  248748 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220412200421-42006
	I0412 20:04:32.028118  248748 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220412200421-42006/config.json ...
	I0412 20:04:32.028465  248748 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0412 20:04:32.028641  248748 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220412200421-42006
	I0412 20:04:32.063651  248748 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49392 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/old-k8s-version-20220412200421-42006/id_rsa Username:docker}
	I0412 20:04:32.153509  248748 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0412 20:04:32.158561  248748 start.go:134] duration metric: createHost completed in 9.895410943s
	I0412 20:04:32.158596  248748 start.go:81] releasing machines lock for "old-k8s-version-20220412200421-42006", held for 9.895561277s
	I0412 20:04:32.158673  248748 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220412200421-42006
	I0412 20:04:32.194400  248748 ssh_runner.go:195] Run: systemctl --version
	I0412 20:04:32.194466  248748 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220412200421-42006
	I0412 20:04:32.194487  248748 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0412 20:04:32.194567  248748 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220412200421-42006
	I0412 20:04:32.230042  248748 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49392 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/old-k8s-version-20220412200421-42006/id_rsa Username:docker}
	I0412 20:04:32.231825  248748 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49392 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/old-k8s-version-20220412200421-42006/id_rsa Username:docker}
	I0412 20:04:32.335506  248748 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0412 20:04:32.347009  248748 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0412 20:04:32.357169  248748 docker.go:183] disabling docker service ...
	I0412 20:04:32.357219  248748 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0412 20:04:32.374701  248748 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0412 20:04:32.385338  248748 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0412 20:04:32.466798  248748 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0412 20:04:32.551221  248748 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0412 20:04:32.562195  248748 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0412 20:04:32.576724  248748 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLm1vbml0b3IudjEuY2dyb3VwcyJdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuMSIKICAgIHN0YXRzX2NvbGxlY3RfcGVyaW9kID0gMTAKICAgIGVuYWJsZV90bHNfc3RyZWFtaW5nID0gZmFsc2UKICAgIG1heF9jb250YWluZXJfbG9nX2xpbmVfc2l6ZSA9IDE2Mzg0CiAgICByZXN0cmljdF9vb21fc2NvcmVfYWRqID0gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZF0KICAgICAgZGlzY2FyZF91bnBhY2tlZF9sYXllcnMgPSB0cnVlCiAgICAgIHNuYXBzaG90dGVyID0gIm92ZXJsYXlmcyIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQuZGVmYXVsdF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnVudHJ1c3RlZF93b3JrbG9hZF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICIiCiAgICAgICAgcnVudGltZV9lbmdpbmUgPSAiIgogICAgICAgIHJ1bnRpbWVfcm9vdCA9ICIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzLnJ1bmNdCiAgICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICBTeXN0ZW1kQ2dyb3VwID0gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY25pXQogICAgICBiaW5fZGlyID0gIi9vcHQvY25pL2JpbiIKICAgICAgY29uZl9kaXIgPSAiL2V0Yy9jbmkvbmV0Lm1rIgogICAgICBjb25mX3RlbXBsYXRlID0gIiIKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5yZWdpc3RyeV0KICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5Lm1pcnJvcnNdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5Lm1pcnJvcnMuImRvY2tlci5pbyJdCiAgICAgICAgICBlbmRwb2ludCA9IFsiaHR0cHM6Ly9yZWdpc3RyeS0xLmRvY2tlci5pbyJdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuc2VydmljZS52MS5kaWZmLXNlcnZpY2UiXQogICAgZGVmYXVsdCA9IFsid2Fsa2luZyJdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ2MudjEuc2NoZWR1bGVyIl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
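
The containerd config.toml is shipped to the node as a single base64 blob and decoded in place with `base64 -d | sudo tee /etc/containerd/config.toml`; among other settings it points the CNI conf_dir at /etc/cni/net.mk, matching the kubelet flag set earlier. To inspect such a blob offline, a tiny Go decoder (the input filename is hypothetical):

package main

import (
	"encoding/base64"
	"fmt"
	"os"
	"strings"
)

// Reads a file holding the base64 payload and prints the decoded TOML so it
// can be reviewed before landing on the node.
func main() {
	blob, err := os.ReadFile("config.toml.b64") // hypothetical file holding the blob
	if err != nil {
		panic(err)
	}
	toml, err := base64.StdEncoding.DecodeString(strings.TrimSpace(string(blob)))
	if err != nil {
		panic(err)
	}
	fmt.Print(string(toml))
}
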
	I0412 20:04:32.591566  248748 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0412 20:04:32.599176  248748 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0412 20:04:32.607036  248748 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0412 20:04:32.687549  248748 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0412 20:04:32.756733  248748 start.go:441] Will wait 60s for socket path /run/containerd/containerd.sock
	I0412 20:04:32.756794  248748 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0412 20:04:32.760824  248748 start.go:462] Will wait 60s for crictl version
	I0412 20:04:32.760898  248748 ssh_runner.go:195] Run: sudo crictl version
	I0412 20:04:32.786305  248748 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-04-12T20:04:32Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0412 20:04:43.834416  248748 ssh_runner.go:195] Run: sudo crictl version
	I0412 20:04:43.859367  248748 start.go:471] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.5.10
	RuntimeApiVersion:  v1alpha2
	I0412 20:04:43.859457  248748 ssh_runner.go:195] Run: containerd --version
	I0412 20:04:43.881703  248748 ssh_runner.go:195] Run: containerd --version
	I0412 20:04:43.907427  248748 out.go:176] * Preparing Kubernetes v1.16.0 on containerd 1.5.10 ...
	I0412 20:04:43.907521  248748 cli_runner.go:164] Run: docker network inspect old-k8s-version-20220412200421-42006 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0412 20:04:43.948058  248748 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0412 20:04:43.951711  248748 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0412 20:04:43.966147  248748 out.go:176]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0412 20:04:43.966227  248748 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0412 20:04:43.966298  248748 ssh_runner.go:195] Run: sudo crictl images --output json
	I0412 20:04:43.993661  248748 containerd.go:607] all images are preloaded for containerd runtime.
	I0412 20:04:43.993693  248748 containerd.go:521] Images already preloaded, skipping extraction
	I0412 20:04:43.993751  248748 ssh_runner.go:195] Run: sudo crictl images --output json
	I0412 20:04:44.023034  248748 containerd.go:607] all images are preloaded for containerd runtime.
	I0412 20:04:44.023061  248748 cache_images.go:84] Images are preloaded, skipping loading
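
Preload verification runs `sudo crictl images --output json` and compares the expected image list with what containerd reports. A small Go sketch that shells out the same way and prints the repo tags; the JSON field names follow crictl's output as I understand it, so treat them as an assumption rather than a stable contract:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Lists the images containerd knows about, via crictl's JSON output.
func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var resp struct {
		Images []struct {
			ID       string   `json:"id"`
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}
	if err := json.Unmarshal(out, &resp); err != nil {
		panic(err)
	}
	for _, img := range resp.Images {
		fmt.Println(img.ID, img.RepoTags)
	}
}
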
	I0412 20:04:44.023104  248748 ssh_runner.go:195] Run: sudo crictl info
	I0412 20:04:44.050728  248748 cni.go:93] Creating CNI manager for ""
	I0412 20:04:44.050760  248748 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0412 20:04:44.050773  248748 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0412 20:04:44.050787  248748 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-20220412200421-42006 NodeName:old-k8s-version-20220412200421-42006 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0412 20:04:44.050910  248748 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "old-k8s-version-20220412200421-42006"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-20220412200421-42006
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.67.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0412 20:04:44.050988  248748 kubeadm.go:936] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-20220412200421-42006 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220412200421-42006 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0412 20:04:44.051035  248748 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0412 20:04:44.058922  248748 binaries.go:44] Found k8s binaries, skipping transfer
	I0412 20:04:44.058996  248748 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0412 20:04:44.066670  248748 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (581 bytes)
	I0412 20:04:44.082309  248748 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0412 20:04:44.098128  248748 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
	I0412 20:04:44.113832  248748 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0412 20:04:44.117135  248748 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0412 20:04:44.127390  248748 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220412200421-42006 for IP: 192.168.67.2
	I0412 20:04:44.127508  248748 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.key
	I0412 20:04:44.127555  248748 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.key
	I0412 20:04:44.127601  248748 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220412200421-42006/client.key
	I0412 20:04:44.127618  248748 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220412200421-42006/client.crt with IP's: []
	I0412 20:04:44.411245  248748 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220412200421-42006/client.crt ...
	I0412 20:04:44.411282  248748 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220412200421-42006/client.crt: {Name:mk377b219ea0893011dbd21c1683297f9e8ed6c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 20:04:44.411499  248748 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220412200421-42006/client.key ...
	I0412 20:04:44.411513  248748 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220412200421-42006/client.key: {Name:mk93aff8adb28739fcfe1441254b46ffbebb071f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 20:04:44.411601  248748 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220412200421-42006/apiserver.key.c7fa3a9e
	I0412 20:04:44.411620  248748 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220412200421-42006/apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0412 20:04:44.707264  248748 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220412200421-42006/apiserver.crt.c7fa3a9e ...
	I0412 20:04:44.707300  248748 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220412200421-42006/apiserver.crt.c7fa3a9e: {Name:mk0b4cfef05cca4d3903d0ae89a3e99966d37289 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 20:04:44.707511  248748 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220412200421-42006/apiserver.key.c7fa3a9e ...
	I0412 20:04:44.707525  248748 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220412200421-42006/apiserver.key.c7fa3a9e: {Name:mk63fcf9db42e2f7b0d0bb28456a14d066d54a6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 20:04:44.707612  248748 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220412200421-42006/apiserver.crt.c7fa3a9e -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220412200421-42006/apiserver.crt
	I0412 20:04:44.707670  248748 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220412200421-42006/apiserver.key.c7fa3a9e -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220412200421-42006/apiserver.key
	I0412 20:04:44.707722  248748 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220412200421-42006/proxy-client.key
	I0412 20:04:44.707737  248748 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220412200421-42006/proxy-client.crt with IP's: []
	I0412 20:04:44.925274  248748 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220412200421-42006/proxy-client.crt ...
	I0412 20:04:44.925319  248748 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220412200421-42006/proxy-client.crt: {Name:mk26dc99f6d209cf66456b19f0fb6788ae9278f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 20:04:44.925526  248748 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220412200421-42006/proxy-client.key ...
	I0412 20:04:44.925540  248748 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220412200421-42006/proxy-client.key: {Name:mk15cc72b60ca5d7a6edd5bd8a1ef32f3cfa1f7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 20:04:44.925703  248748 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/42006.pem (1338 bytes)
	W0412 20:04:44.925741  248748 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/42006_empty.pem, impossibly tiny 0 bytes
	I0412 20:04:44.925753  248748 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem (1679 bytes)
	I0412 20:04:44.925778  248748 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem (1082 bytes)
	I0412 20:04:44.925801  248748 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem (1123 bytes)
	I0412 20:04:44.925838  248748 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem (1675 bytes)
	I0412 20:04:44.925878  248748 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/420062.pem (1708 bytes)
	I0412 20:04:44.926430  248748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220412200421-42006/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0412 20:04:44.945785  248748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220412200421-42006/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0412 20:04:44.964881  248748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220412200421-42006/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0412 20:04:44.983694  248748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220412200421-42006/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0412 20:04:45.003465  248748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0412 20:04:45.022447  248748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0412 20:04:45.041588  248748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0412 20:04:45.061359  248748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0412 20:04:45.080712  248748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/420062.pem --> /usr/share/ca-certificates/420062.pem (1708 bytes)
	I0412 20:04:45.101410  248748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0412 20:04:45.121509  248748 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/42006.pem --> /usr/share/ca-certificates/42006.pem (1338 bytes)
	I0412 20:04:45.140207  248748 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0412 20:04:45.153466  248748 ssh_runner.go:195] Run: openssl version
	I0412 20:04:45.159252  248748 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/42006.pem && ln -fs /usr/share/ca-certificates/42006.pem /etc/ssl/certs/42006.pem"
	I0412 20:04:45.167403  248748 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42006.pem
	I0412 20:04:45.170830  248748 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Apr 12 19:26 /usr/share/ca-certificates/42006.pem
	I0412 20:04:45.170897  248748 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42006.pem
	I0412 20:04:45.176215  248748 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/42006.pem /etc/ssl/certs/51391683.0"
	I0412 20:04:45.184606  248748 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/420062.pem && ln -fs /usr/share/ca-certificates/420062.pem /etc/ssl/certs/420062.pem"
	I0412 20:04:45.193543  248748 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/420062.pem
	I0412 20:04:45.197306  248748 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Apr 12 19:26 /usr/share/ca-certificates/420062.pem
	I0412 20:04:45.197370  248748 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/420062.pem
	I0412 20:04:45.202877  248748 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/420062.pem /etc/ssl/certs/3ec20f2e.0"
	I0412 20:04:45.211343  248748 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0412 20:04:45.219254  248748 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0412 20:04:45.222995  248748 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Apr 12 19:21 /usr/share/ca-certificates/minikubeCA.pem
	I0412 20:04:45.223056  248748 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0412 20:04:45.228984  248748 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0412 20:04:45.237783  248748 kubeadm.go:391] StartCluster: {Name:old-k8s-version-20220412200421-42006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220412200421-42006 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0412 20:04:45.237892  248748 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0412 20:04:45.237937  248748 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0412 20:04:45.265876  248748 cri.go:87] found id: ""
	I0412 20:04:45.265949  248748 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0412 20:04:45.273688  248748 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0412 20:04:45.281838  248748 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0412 20:04:45.281913  248748 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0412 20:04:45.290337  248748 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0412 20:04:45.290404  248748 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0412 20:04:45.682996  248748 out.go:203]   - Generating certificates and keys ...
	I0412 20:04:48.106521  248748 out.go:203]   - Booting up control plane ...
	I0412 20:04:58.657252  248748 out.go:203]   - Configuring RBAC rules ...
	I0412 20:04:59.081556  248748 cni.go:93] Creating CNI manager for ""
	I0412 20:04:59.081591  248748 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0412 20:04:59.083800  248748 out.go:176] * Configuring CNI (Container Networking Interface) ...
	I0412 20:04:59.083881  248748 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0412 20:04:59.088149  248748 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.16.0/kubectl ...
	I0412 20:04:59.088177  248748 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0412 20:04:59.104408  248748 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0412 20:04:59.443235  248748 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0412 20:04:59.443343  248748 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.25.2 minikube.k8s.io/commit=dcd548d63d1c0dcbdc0ffc0bd37d4379117c142f minikube.k8s.io/name=old-k8s-version-20220412200421-42006 minikube.k8s.io/updated_at=2022_04_12T20_04_59_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:04:59.443352  248748 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:04:59.539719  248748 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:04:59.539787  248748 ops.go:34] apiserver oom_adj: -16
	I0412 20:05:00.127170  248748 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:05:00.627344  248748 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:05:01.126926  248748 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:05:01.627274  248748 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:05:02.126692  248748 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:05:02.627191  248748 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:05:03.126935  248748 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:05:03.628877  248748 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:05:04.127500  248748 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:05:04.627328  248748 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:05:05.126933  248748 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:05:05.627393  248748 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:05:06.126608  248748 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:05:06.626806  248748 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:05:07.127446  248748 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:05:07.626740  248748 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:05:08.127430  248748 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:05:08.627457  248748 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:05:09.127306  248748 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:05:09.627534  248748 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:05:10.126681  248748 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:05:10.626598  248748 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:05:11.127164  248748 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:05:11.627295  248748 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:05:12.126522  248748 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:05:12.627396  248748 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:05:13.127235  248748 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:05:13.626641  248748 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:05:14.127314  248748 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:05:14.627371  248748 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:05:14.718629  248748 kubeadm.go:1020] duration metric: took 15.275348369s to wait for elevateKubeSystemPrivileges.
	I0412 20:05:14.718668  248748 kubeadm.go:393] StartCluster complete in 29.480897955s
	I0412 20:05:14.718692  248748 settings.go:142] acquiring lock: {Name:mkaf0259d09993f7f0249c32b54fea561e21f88c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 20:05:14.718824  248748 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0412 20:05:14.720388  248748 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig: {Name:mk47182dadcc139652898f38b199a7292a3b4031 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 20:05:15.241568  248748 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "old-k8s-version-20220412200421-42006" rescaled to 1
	I0412 20:05:15.241644  248748 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0412 20:05:15.241701  248748 addons.go:415] enableAddons start: toEnable=map[], additional=[]
	I0412 20:05:15.241775  248748 addons.go:65] Setting storage-provisioner=true in profile "old-k8s-version-20220412200421-42006"
	I0412 20:05:15.241797  248748 addons.go:153] Setting addon storage-provisioner=true in "old-k8s-version-20220412200421-42006"
	W0412 20:05:15.241805  248748 addons.go:165] addon storage-provisioner should already be in state true
	I0412 20:05:15.241852  248748 host.go:66] Checking if "old-k8s-version-20220412200421-42006" exists ...
	I0412 20:05:15.241685  248748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0412 20:05:15.241914  248748 config.go:178] Loaded profile config "old-k8s-version-20220412200421-42006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I0412 20:05:15.246135  248748 out.go:176] * Verifying Kubernetes components...
	I0412 20:05:15.246216  248748 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0412 20:05:15.241999  248748 addons.go:65] Setting default-storageclass=true in profile "old-k8s-version-20220412200421-42006"
	I0412 20:05:15.242419  248748 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220412200421-42006 --format={{.State.Status}}
	I0412 20:05:15.246290  248748 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-20220412200421-42006"
	I0412 20:05:15.246699  248748 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220412200421-42006 --format={{.State.Status}}
	I0412 20:05:15.301662  248748 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0412 20:05:15.301834  248748 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0412 20:05:15.301855  248748 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0412 20:05:15.301953  248748 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220412200421-42006
	I0412 20:05:15.304092  248748 addons.go:153] Setting addon default-storageclass=true in "old-k8s-version-20220412200421-42006"
	W0412 20:05:15.304121  248748 addons.go:165] addon default-storageclass should already be in state true
	I0412 20:05:15.304158  248748 host.go:66] Checking if "old-k8s-version-20220412200421-42006" exists ...
	I0412 20:05:15.304707  248748 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220412200421-42006 --format={{.State.Status}}
	I0412 20:05:15.354233  248748 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49392 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/old-k8s-version-20220412200421-42006/id_rsa Username:docker}
	I0412 20:05:15.358365  248748 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0412 20:05:15.358396  248748 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0412 20:05:15.358459  248748 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220412200421-42006
	I0412 20:05:15.369291  248748 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.67.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0412 20:05:15.370383  248748 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-20220412200421-42006" to be "Ready" ...
	I0412 20:05:15.406487  248748 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49392 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/old-k8s-version-20220412200421-42006/id_rsa Username:docker}
	I0412 20:05:15.487811  248748 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0412 20:05:15.607427  248748 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0412 20:05:15.728537  248748 start.go:777] {"host.minikube.internal": 192.168.67.1} host record injected into CoreDNS
	I0412 20:05:16.186137  248748 out.go:176] * Enabled addons: storage-provisioner, default-storageclass
	I0412 20:05:16.186177  248748 addons.go:417] enableAddons completed in 944.485335ms
	I0412 20:05:17.377151  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:05:19.876966  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:05:22.377720  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:05:24.877720  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:05:27.377151  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:05:29.377203  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:05:31.377505  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:05:33.877407  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:05:36.377594  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:05:38.877096  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:05:41.377122  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:05:43.876528  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:05:45.877546  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:05:48.376870  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:05:50.377796  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:05:52.877440  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:05:55.376760  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:05:57.377228  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:05:59.377356  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:06:01.876847  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:06:04.376710  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:06:06.376901  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:06:08.876871  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:06:10.877547  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:06:13.376799  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:06:15.377186  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:06:17.377369  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:06:19.877160  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:06:22.376782  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:06:24.376819  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:06:26.376974  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:06:28.377590  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:06:30.876857  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:06:33.377098  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:06:35.377384  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:06:37.877798  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:06:40.377081  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:06:42.877160  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:06:45.376791  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:06:47.377539  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:06:49.876781  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:06:51.877708  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:06:54.376925  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:06:56.377007  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:06:58.377120  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:07:00.876752  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:07:02.877545  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:07:05.377276  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:07:07.876894  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:07:09.877235  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:07:12.377352  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:07:14.876965  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:07:17.377125  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:07:19.377352  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:07:21.877681  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:07:24.376731  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:07:26.377297  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:07:28.377600  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:07:30.876826  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:07:32.876932  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:07:34.877535  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:07:37.377700  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:07:39.876618  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:07:41.877026  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:07:43.877127  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:07:45.877640  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:07:48.377421  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:07:50.876981  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:07:52.877111  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:07:55.377129  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:07:57.876917  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:07:59.876969  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:08:02.377072  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:08:04.377447  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:08:06.377927  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:08:08.876715  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:08:10.876992  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:08:12.877639  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:08:15.377865  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:08:17.877000  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:08:19.877332  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:08:21.877545  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:08:24.377891  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:08:26.876684  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:08:28.877006  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:08:30.877641  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:08:33.377445  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:08:35.876596  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:08:37.877405  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:08:40.377447  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:08:42.876734  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:08:44.877017  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:08:47.376581  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:08:49.377052  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:08:51.377414  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:08:53.877648  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:08:56.376551  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:08:58.376693  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:09:00.377390  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:09:02.876643  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:09:04.877491  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:09:07.376877  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:09:09.377403  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:09:11.876753  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:09:13.877655  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:09:15.379867  248748 node_ready.go:38] duration metric: took 4m0.009449893s waiting for node "old-k8s-version-20220412200421-42006" to be "Ready" ...
	I0412 20:09:15.382455  248748 out.go:176] 
	W0412 20:09:15.382637  248748 out.go:241] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0412 20:09:15.382653  248748 out.go:241] * 
	W0412 20:09:15.383376  248748 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0412 20:09:15.384634  248748 out.go:176] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:172: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-20220412200421-42006 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0": exit status 80
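The log above shows the node reporting "Ready":"False" from 20:05 until minikube gives up at 20:09 and exits with GUEST_START. With the docker driver and containerd, a node that stays NotReady after the kindnet manifest is applied often points at the CNI pods never becoming healthy. A minimal triage sketch, assuming the kubeconfig context minikube created for this profile and that the cluster were still up (illustrative commands, not part of the test harness):

	# Why is Ready stuck at False? Check the node's conditions and events.
	kubectl --context old-k8s-version-20220412200421-42006 describe node old-k8s-version-20220412200421-42006
	# Did the kindnet and coredns pods ever start?
	kubectl --context old-k8s-version-20220412200421-42006 -n kube-system get pods -o wide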
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220412200421-42006
helpers_test.go:235: (dbg) docker inspect old-k8s-version-20220412200421-42006:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a5e4ff2bbf6e0c1f98d862b7c5909f328d958a622c77ca8f2a1aeb8757f4bc42",
	        "Created": "2022-04-12T20:04:30.270409412Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 249540,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-04-12T20:04:30.654643592Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:44d43b69f3d5ba7f801dca891b535f23f9839671e82277938ec7dc42a22c50d6",
	        "ResolvConfPath": "/var/lib/docker/containers/a5e4ff2bbf6e0c1f98d862b7c5909f328d958a622c77ca8f2a1aeb8757f4bc42/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a5e4ff2bbf6e0c1f98d862b7c5909f328d958a622c77ca8f2a1aeb8757f4bc42/hostname",
	        "HostsPath": "/var/lib/docker/containers/a5e4ff2bbf6e0c1f98d862b7c5909f328d958a622c77ca8f2a1aeb8757f4bc42/hosts",
	        "LogPath": "/var/lib/docker/containers/a5e4ff2bbf6e0c1f98d862b7c5909f328d958a622c77ca8f2a1aeb8757f4bc42/a5e4ff2bbf6e0c1f98d862b7c5909f328d958a622c77ca8f2a1aeb8757f4bc42-json.log",
	        "Name": "/old-k8s-version-20220412200421-42006",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "old-k8s-version-20220412200421-42006:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20220412200421-42006",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/7832f59e03daf68e56b6521f25b5ed3223d02619c327fdde0f78d7822640d042-init/diff:/var/lib/docker/overlay2/a46d95d024de4bf9705eb193a92586bdab1878cd991975232b71b00099a9dcbd/diff:/var/lib/docker/overlay2/ea82ee4a684697cc3575193cd81b57372b927c9bf8e744fce634f9abd0ce56f9/diff:/var/lib/docker/overlay2/78746ad8dd0d6497f442bd186c99cfd280a7ed0ff07c9d33d217c0f00c8c4565/diff:/var/lib/docker/overlay2/a402f380eceb56655ea5f1e6ca4a61a01ae014a5df04f1a7d02d8f57ff3e6c84/diff:/var/lib/docker/overlay2/b27a231791a4d14a662f9e6e34fdd213411e56cc17149199657aa480018b3c72/diff:/var/lib/docker/overlay2/0a44e7fc2c8d5589d496b9d0585d39e8e142f48342ff9669a35c370bd0298e42/diff:/var/lib/docker/overlay2/6ca98e52ca7d4cc60d14bd2db9969dd3356e0e0ce3acd5bfb5734e6e59f52c7e/diff:/var/lib/docker/overlay2/9957a7c00c30c9d801326093ddf20994a7ee1daaa54bc4dac5c2dd6d8711bd7e/diff:/var/lib/docker/overlay2/f7a1aafecf6ee716c484b5eecbbf236a53607c253fe283c289707fad85495a88/diff:/var/lib/docker/overlay2/fe8cd1
26522650fedfc827751e0b74da9a882ff48de51bc9dee6428ee3bc1122/diff:/var/lib/docker/overlay2/5b4cc7e4a78288063ad39231ca158608aa28e9dec6015d4e186e4c4d6888017f/diff:/var/lib/docker/overlay2/2a754ceb6abee0f92c99667fae50c7899233e94595630e9caffbf73cda1ff741/diff:/var/lib/docker/overlay2/9e69139d9b2bc63ab678378e004018ece394ec37e8289ba5eb30901dda160da5/diff:/var/lib/docker/overlay2/3db8e6413b3a1f309b81d2e1a79c3d239c4e4568b31a6f4bf92511f477f3a61d/diff:/var/lib/docker/overlay2/5ab54e45d09e2d6da4f4228ebae3075b5974e1d847526c1011fc7368392ef0d2/diff:/var/lib/docker/overlay2/6daf6a3cf916347bbbb70ace4aab29dd0f272dc9e39d6b0bf14940470857f1d5/diff:/var/lib/docker/overlay2/b85d29df9ed74e769c82a956eb46ca4eaf51018e94270fee2f58a6f2d82c354c/diff:/var/lib/docker/overlay2/0804b9c30e0dcc68e15139106e47bca1969b010d520652c87ff1476f5da9b799/diff:/var/lib/docker/overlay2/2ef50ba91c77826aae2efca8daf7194c2d56fd8e745476a35413585cdab580a6/diff:/var/lib/docker/overlay2/6f5a272367c30d47254dedc8a42e6b2791c406c3b74fd6a8242d568e4ec362e3/diff:/var/lib/d
ocker/overlay2/e978bd5ca7463862ca1b51d0bf19f95d916464dc866f09f1ab4a5ae4c082c3a9/diff:/var/lib/docker/overlay2/0d60a5805e276ca3bff4824250eab1d2960e9d10d28282e07652204c07dc107f/diff:/var/lib/docker/overlay2/d00efa0bc999057fcf3efdeed81022cc8b9b9871919f11d7d9199a3d22fda41b/diff:/var/lib/docker/overlay2/44d3db5bf7925c4cc8ee60008ff23d799e12ea6586850d797b930fa796788861/diff:/var/lib/docker/overlay2/4af15c525b7ce96b7fd4117c156f53cf9099702641c2907909c12b7019563d44/diff:/var/lib/docker/overlay2/ae9ca4b8da4afb1303158a42ec2ac83dc057c0eaefcd69b7eeaa094ae24a39e7/diff:/var/lib/docker/overlay2/afb8ebd776ddcba17d1056f2350cd0b303c6664964644896a92e9c07252b5d95/diff:/var/lib/docker/overlay2/41b6235378ad54ccaec907f16811e7cd66bd777db63151293f4d8247a33af8f1/diff:/var/lib/docker/overlay2/e079465076581cb577a9d5c7d676cecb6495ddd73d9fc330e734203dd7e48607/diff:/var/lib/docker/overlay2/2d3a7c3e62a99d54d94c2562e13b904453442bda8208afe73cdbe1afdbdd0684/diff:/var/lib/docker/overlay2/b9e03b9cbc1c5a9bbdbb0c99ca5d7539c2fa81a37872c40e07377b52f19
50f4b/diff:/var/lib/docker/overlay2/fd0b72378869edec809e7ead1e4448ae67c73245e0e98d751c51253c80f12d56/diff:/var/lib/docker/overlay2/a34f5625ad35eb2eb1058204a5c23590d70d9aae62a3a0cf05f87501c388ccde/diff:/var/lib/docker/overlay2/6221ad5f4d7b133c35d96ab112cf2eb437196475a72ea0ec8952c058c6644381/diff:/var/lib/docker/overlay2/b33a322162ab62a47e5e731b35da4a989d8a79fcb67e1925b109eace6772370c/diff:/var/lib/docker/overlay2/b52fc81aca49f276f1c709fa139521063628f4042b9da5969a3487a57ee3226b/diff:/var/lib/docker/overlay2/5b4d11a181cad1ea657c7ea99d422b51c942ece21b8d24442b4e8806644e0e1c/diff:/var/lib/docker/overlay2/1620ce1d42f02f38d07f3ff0970e3df6940a3be20f3c7cd835f4f40f5cc2d010/diff:/var/lib/docker/overlay2/43f18c528700dc241024bb24f43a0d5192ecc9575f4b053582410f6265326434/diff:/var/lib/docker/overlay2/e59874999e485483e50da428a499e40c91890c33515857454d7a64bc04ca0c43/diff:/var/lib/docker/overlay2/a120ff1bbaa325cd87d2682d6751d3bf287b66d4bbe31bd1f9f6283d724491ac/diff:/var/lib/docker/overlay2/a6a6f3646fabc023283ff6349b9627be8332c4
bb740688f8fda12c98bd76b725/diff:/var/lib/docker/overlay2/3c2b110c4b3a8689b2792b2b73f99f06bd9858b494c2164e812208579b0223f2/diff:/var/lib/docker/overlay2/98e3881e2e4128283f8d66fafc082bc795e22eab77f135635d3249367b92ba5c/diff:/var/lib/docker/overlay2/ce937670cf64eff618c699bfd15e46c6d70c0184fef594182e5ec6df83b265bc/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7832f59e03daf68e56b6521f25b5ed3223d02619c327fdde0f78d7822640d042/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7832f59e03daf68e56b6521f25b5ed3223d02619c327fdde0f78d7822640d042/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7832f59e03daf68e56b6521f25b5ed3223d02619c327fdde0f78d7822640d042/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20220412200421-42006",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20220412200421-42006/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20220412200421-42006",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20220412200421-42006",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20220412200421-42006",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3ad84289742f0dfbd44646dfe51c90a2743ffb78bf6626291683c05a3d95eee0",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49392"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49391"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49388"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49390"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49389"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/3ad84289742f",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20220412200421-42006": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "a5e4ff2bbf6e",
	                        "old-k8s-version-20220412200421-42006"
	                    ],
	                    "NetworkID": "0b96a6a249d72d5fff5d5b9db029edbfc6a07a56e8064108c65000591927cbc6",
	                    "EndpointID": "c3007d28c5878ca69ad88197e01438f31f4f4f7d8152c555a927532e6a59c8f3",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
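The post-mortem dumps the container's full inspect JSON above. When only the failure-relevant fields are needed (container state, published ports, assigned IP), docker's Go-template format flag can pull them out directly; a small sketch against the same container name (illustrative, not something the harness runs):

	# Is the container still running, and where is the apiserver published?
	docker inspect -f '{{.State.Status}}' old-k8s-version-20220412200421-42006
	docker inspect -f '{{json .NetworkSettings.Ports}}' old-k8s-version-20220412200421-42006
	# The cluster IP minikube assigned on its dedicated network
	docker inspect -f '{{(index .NetworkSettings.Networks "old-k8s-version-20220412200421-42006").IPAddress}}' old-k8s-version-20220412200421-42006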
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20220412200421-42006 -n old-k8s-version-20220412200421-42006
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/FirstStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/FirstStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-20220412200421-42006 logs -n 25
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/FirstStart logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|-----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                       Args                        |                 Profile                 |  User   | Version |          Start Time           |           End Time            |
	|---------|---------------------------------------------------|-----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| start   | -p                                                | custom-weave-20220412195203-42006       | jenkins | v1.25.2 | Tue, 12 Apr 2022 19:54:42 UTC | Tue, 12 Apr 2022 19:55:57 UTC |
	|         | custom-weave-20220412195203-42006                 |                                         |         |         |                               |                               |
	|         | --memory=2048 --alsologtostderr                   |                                         |         |         |                               |                               |
	|         | --wait=true --wait-timeout=5m                     |                                         |         |         |                               |                               |
	|         | --cni=testdata/weavenet.yaml                      |                                         |         |         |                               |                               |
	|         | --driver=docker                                   |                                         |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                         |         |         |                               |                               |
	| ssh     | -p                                                | custom-weave-20220412195203-42006       | jenkins | v1.25.2 | Tue, 12 Apr 2022 19:55:57 UTC | Tue, 12 Apr 2022 19:55:57 UTC |
	|         | custom-weave-20220412195203-42006                 |                                         |         |         |                               |                               |
	|         | pgrep -a kubelet                                  |                                         |         |         |                               |                               |
	| start   | -p                                                | cert-expiration-20220412195203-42006    | jenkins | v1.25.2 | Tue, 12 Apr 2022 19:52:03 UTC | Tue, 12 Apr 2022 19:56:06 UTC |
	|         | cert-expiration-20220412195203-42006              |                                         |         |         |                               |                               |
	|         | --memory=2048 --cert-expiration=3m                |                                         |         |         |                               |                               |
	|         | --driver=docker                                   |                                         |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                         |         |         |                               |                               |
	| delete  | -p                                                | custom-weave-20220412195203-42006       | jenkins | v1.25.2 | Tue, 12 Apr 2022 19:56:06 UTC | Tue, 12 Apr 2022 19:56:09 UTC |
	|         | custom-weave-20220412195203-42006                 |                                         |         |         |                               |                               |
	| start   | -p cilium-20220412195203-42006                    | cilium-20220412195203-42006             | jenkins | v1.25.2 | Tue, 12 Apr 2022 19:55:47 UTC | Tue, 12 Apr 2022 19:57:10 UTC |
	|         | --memory=2048                                     |                                         |         |         |                               |                               |
	|         | --alsologtostderr                                 |                                         |         |         |                               |                               |
	|         | --wait=true --wait-timeout=5m                     |                                         |         |         |                               |                               |
	|         | --cni=cilium --driver=docker                      |                                         |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                         |         |         |                               |                               |
	| ssh     | -p cilium-20220412195203-42006                    | cilium-20220412195203-42006             | jenkins | v1.25.2 | Tue, 12 Apr 2022 19:57:15 UTC | Tue, 12 Apr 2022 19:57:15 UTC |
	|         | pgrep -a kubelet                                  |                                         |         |         |                               |                               |
	| delete  | -p cilium-20220412195203-42006                    | cilium-20220412195203-42006             | jenkins | v1.25.2 | Tue, 12 Apr 2022 19:57:26 UTC | Tue, 12 Apr 2022 19:57:29 UTC |
	| start   | -p                                                | enable-default-cni-20220412195202-42006 | jenkins | v1.25.2 | Tue, 12 Apr 2022 19:57:29 UTC | Tue, 12 Apr 2022 19:58:30 UTC |
	|         | enable-default-cni-20220412195202-42006           |                                         |         |         |                               |                               |
	|         | --memory=2048 --alsologtostderr                   |                                         |         |         |                               |                               |
	|         | --wait=true --wait-timeout=5m                     |                                         |         |         |                               |                               |
	|         | --enable-default-cni=true                         |                                         |         |         |                               |                               |
	|         | --driver=docker                                   |                                         |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                         |         |         |                               |                               |
	| ssh     | -p                                                | enable-default-cni-20220412195202-42006 | jenkins | v1.25.2 | Tue, 12 Apr 2022 19:58:31 UTC | Tue, 12 Apr 2022 19:58:31 UTC |
	|         | enable-default-cni-20220412195202-42006           |                                         |         |         |                               |                               |
	|         | pgrep -a kubelet                                  |                                         |         |         |                               |                               |
	| start   | -p                                                | cert-expiration-20220412195203-42006    | jenkins | v1.25.2 | Tue, 12 Apr 2022 19:59:06 UTC | Tue, 12 Apr 2022 19:59:21 UTC |
	|         | cert-expiration-20220412195203-42006              |                                         |         |         |                               |                               |
	|         | --memory=2048                                     |                                         |         |         |                               |                               |
	|         | --cert-expiration=8760h                           |                                         |         |         |                               |                               |
	|         | --driver=docker                                   |                                         |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                         |         |         |                               |                               |
	| delete  | -p                                                | cert-expiration-20220412195203-42006    | jenkins | v1.25.2 | Tue, 12 Apr 2022 19:59:21 UTC | Tue, 12 Apr 2022 19:59:24 UTC |
	|         | cert-expiration-20220412195203-42006              |                                         |         |         |                               |                               |
	| -p      | pause-20220412195428-42006                        | pause-20220412195428-42006              | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:02:37 UTC | Tue, 12 Apr 2022 20:02:38 UTC |
	|         | logs -n 25                                        |                                         |         |         |                               |                               |
	| delete  | -p pause-20220412195428-42006                     | pause-20220412195428-42006              | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:02:39 UTC | Tue, 12 Apr 2022 20:02:42 UTC |
	| -p      | kindnet-20220412195202-42006                      | kindnet-20220412195202-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:04:17 UTC | Tue, 12 Apr 2022 20:04:18 UTC |
	|         | logs -n 25                                        |                                         |         |         |                               |                               |
	| delete  | -p                                                | kindnet-20220412195202-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:04:19 UTC | Tue, 12 Apr 2022 20:04:21 UTC |
	|         | kindnet-20220412195202-42006                      |                                         |         |         |                               |                               |
	| -p      | enable-default-cni-20220412195202-42006           | enable-default-cni-20220412195202-42006 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:04:48 UTC | Tue, 12 Apr 2022 20:04:49 UTC |
	|         | logs -n 25                                        |                                         |         |         |                               |                               |
	| delete  | -p                                                | enable-default-cni-20220412195202-42006 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:04:50 UTC | Tue, 12 Apr 2022 20:04:53 UTC |
	|         | enable-default-cni-20220412195202-42006           |                                         |         |         |                               |                               |
	| -p      | calico-20220412195203-42006                       | calico-20220412195203-42006             | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:05:03 UTC | Tue, 12 Apr 2022 20:05:05 UTC |
	|         | logs -n 25                                        |                                         |         |         |                               |                               |
	| delete  | -p calico-20220412195203-42006                    | calico-20220412195203-42006             | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:05:05 UTC | Tue, 12 Apr 2022 20:05:10 UTC |
	| start   | -p                                                | no-preload-20220412200453-42006         | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:04:53 UTC | Tue, 12 Apr 2022 20:06:07 UTC |
	|         | no-preload-20220412200453-42006                   |                                         |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                         |         |         |                               |                               |
	|         | --wait=true --preload=false                       |                                         |         |         |                               |                               |
	|         | --driver=docker                                   |                                         |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                         |         |         |                               |                               |
	|         | --kubernetes-version=v1.23.6-rc.0                 |                                         |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | no-preload-20220412200453-42006         | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:06:16 UTC | Tue, 12 Apr 2022 20:06:17 UTC |
	|         | no-preload-20220412200453-42006                   |                                         |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                         |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                         |         |         |                               |                               |
	| stop    | -p                                                | no-preload-20220412200453-42006         | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:06:17 UTC | Tue, 12 Apr 2022 20:06:37 UTC |
	|         | no-preload-20220412200453-42006                   |                                         |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                         |         |         |                               |                               |
	| addons  | enable dashboard -p                               | no-preload-20220412200453-42006         | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:06:37 UTC | Tue, 12 Apr 2022 20:06:38 UTC |
	|         | no-preload-20220412200453-42006                   |                                         |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                         |         |         |                               |                               |
	| start   | -p bridge-20220412195202-42006                    | bridge-20220412195202-42006             | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:02:42 UTC | Tue, 12 Apr 2022 20:07:57 UTC |
	|         | --memory=2048                                     |                                         |         |         |                               |                               |
	|         | --alsologtostderr                                 |                                         |         |         |                               |                               |
	|         | --wait=true --wait-timeout=5m                     |                                         |         |         |                               |                               |
	|         | --cni=bridge --driver=docker                      |                                         |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                         |         |         |                               |                               |
	| ssh     | -p bridge-20220412195202-42006                    | bridge-20220412195202-42006             | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:07:57 UTC | Tue, 12 Apr 2022 20:07:58 UTC |
	|         | pgrep -a kubelet                                  |                                         |         |         |                               |                               |
	|---------|---------------------------------------------------|-----------------------------------------|---------|---------|-------------------------------|-------------------------------|
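
Each row in the audit table above is a single minikube invocation, with the command line wrapped across the Args column. The first custom-weave row, for example, reconstructs to the following command (a reconstruction from the table, not a literal log line; same binary path as the rest of this run):

	out/minikube-linux-amd64 start -p custom-weave-20220412195203-42006 \
	  --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m \
	  --cni=testdata/weavenet.yaml --driver=docker --container-runtime=containerd
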
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/04/12 20:06:38
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.18 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0412 20:06:38.070775  262043 out.go:297] Setting OutFile to fd 1 ...
	I0412 20:06:38.070924  262043 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0412 20:06:38.070934  262043 out.go:310] Setting ErrFile to fd 2...
	I0412 20:06:38.070939  262043 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0412 20:06:38.071052  262043 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/bin
	I0412 20:06:38.071305  262043 out.go:304] Setting JSON to false
	I0412 20:06:38.072898  262043 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":10151,"bootTime":1649783847,"procs":578,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1023-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0412 20:06:38.072978  262043 start.go:125] virtualization: kvm guest
	I0412 20:06:38.076134  262043 out.go:176] * [no-preload-20220412200453-42006] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	I0412 20:06:38.078061  262043 out.go:176]   - MINIKUBE_LOCATION=13812
	I0412 20:06:38.076319  262043 notify.go:193] Checking for updates...
	I0412 20:06:38.079814  262043 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0412 20:06:38.081760  262043 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0412 20:06:38.083632  262043 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube
	I0412 20:06:38.085370  262043 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0412 20:06:38.085992  262043 config.go:178] Loaded profile config "no-preload-20220412200453-42006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6-rc.0
	I0412 20:06:38.086634  262043 driver.go:346] Setting default libvirt URI to qemu:///system
	I0412 20:06:38.132805  262043 docker.go:137] docker version: linux-20.10.14
	I0412 20:06:38.132930  262043 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0412 20:06:38.235912  262043 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:75 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:49 SystemTime:2022-04-12 20:06:38.16523747 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1023-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662799872 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0412 20:06:38.236034  262043 docker.go:254] overlay module found
	I0412 20:06:38.238794  262043 out.go:176] * Using the docker driver based on existing profile
	I0412 20:06:38.238830  262043 start.go:284] selected driver: docker
	I0412 20:06:38.238836  262043 start.go:801] validating driver "docker" against &{Name:no-preload-20220412200453-42006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6-rc.0 ClusterName:no-preload-20220412200453-42006 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.6-rc.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0412 20:06:38.238961  262043 start.go:812] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	W0412 20:06:38.239009  262043 oci.go:120] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0412 20:06:38.239032  262043 out.go:241] ! Your cgroup does not allow setting memory.
	I0412 20:06:38.240836  262043 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0412 20:06:38.241472  262043 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0412 20:06:38.341391  262043 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:75 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:49 SystemTime:2022-04-12 20:06:38.273881484 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1023-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662799872 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	W0412 20:06:38.341566  262043 oci.go:120] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0412 20:06:38.341672  262043 out.go:241] ! Your cgroup does not allow setting memory.
	I0412 20:06:38.344860  262043 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
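
The repeated "Your cgroup does not allow setting memory" warnings above mean minikube found no usable memory cgroup controller on the host, so the requested --memory=2200 limit cannot be enforced. A generic way to check the controller state on a cgroup v1 host (a diagnostic sketch, not taken from this log):

	# last column is "enabled"; 0 means the memory controller is off
	grep memory /proc/cgroups
	# the controller should also be mounted under /sys/fs/cgroup
	mountpoint /sys/fs/cgroup/memory
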
	I0412 20:06:38.345002  262043 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0412 20:06:38.345035  262043 cni.go:93] Creating CNI manager for ""
	I0412 20:06:38.345045  262043 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0412 20:06:38.345072  262043 start_flags.go:306] config:
	{Name:no-preload-20220412200453-42006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6-rc.0 ClusterName:no-preload-20220412200453-42006 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.6-rc.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0412 20:06:38.347242  262043 out.go:176] * Starting control plane node no-preload-20220412200453-42006 in cluster no-preload-20220412200453-42006
	I0412 20:06:38.347275  262043 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0412 20:06:38.348898  262043 out.go:176] * Pulling base image ...
	I0412 20:06:38.348934  262043 preload.go:132] Checking if preload exists for k8s version v1.23.6-rc.0 and runtime containerd
	I0412 20:06:38.348973  262043 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon
	I0412 20:06:38.349104  262043 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220412200453-42006/config.json ...
	I0412 20:06:38.349253  262043 cache.go:107] acquiring lock: {Name:mk62ec854ac97fe36974639873696d539b0701d0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0412 20:06:38.349259  262043 cache.go:107] acquiring lock: {Name:mk2bda950897038ca1478b3a7163d8ac0f3417b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0412 20:06:38.349371  262043 cache.go:107] acquiring lock: {Name:mkf0415b3ed7938a96d14f1e7cce50737ac15575 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0412 20:06:38.349386  262043 cache.go:107] acquiring lock: {Name:mk6dc1ee3b9a5f568e0933515ea79a17a4e49320 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0412 20:06:38.349405  262043 cache.go:107] acquiring lock: {Name:mk5210dd2f9d4dcb1bae57090039fdcf65f204ee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0412 20:06:38.349414  262043 cache.go:107] acquiring lock: {Name:mkb4e117321415b81dd2df649b67db215b4b34e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0412 20:06:38.349462  262043 cache.go:107] acquiring lock: {Name:mke367e34b80546a2c751cf2682a4715709b415f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0412 20:06:38.349519  262043 cache.go:107] acquiring lock: {Name:mk4b40f363fb59846cd134c4150ff1979bf7055a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0412 20:06:38.349588  262043 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0412 20:06:38.349601  262043 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.6 exists
	I0412 20:06:38.349612  262043 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 371.282µs
	I0412 20:06:38.349618  262043 cache.go:96] cache image "k8s.gcr.io/pause:3.6" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.6" took 311.517µs
	I0412 20:06:38.349630  262043 cache.go:80] save to tar file k8s.gcr.io/pause:3.6 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.6 succeeded
	I0412 20:06:38.349626  262043 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0412 20:06:38.349642  262043 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.23.6-rc.0 exists
	I0412 20:06:38.349651  262043 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.23.6-rc.0 exists
	I0412 20:06:38.349656  262043 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.23.6-rc.0 exists
	I0412 20:06:38.349672  262043 cache.go:96] cache image "k8s.gcr.io/kube-controller-manager:v1.23.6-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.23.6-rc.0" took 290.932µs
	I0412 20:06:38.349687  262043 cache.go:96] cache image "k8s.gcr.io/kube-scheduler:v1.23.6-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.23.6-rc.0" took 321.114µs
	I0412 20:06:38.349688  262043 cache.go:96] cache image "k8s.gcr.io/kube-apiserver:v1.23.6-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.23.6-rc.0" took 277.58µs
	I0412 20:06:38.349695  262043 cache.go:80] save to tar file k8s.gcr.io/kube-controller-manager:v1.23.6-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.23.6-rc.0 succeeded
	I0412 20:06:38.349702  262043 cache.go:80] save to tar file k8s.gcr.io/kube-scheduler:v1.23.6-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.23.6-rc.0 succeeded
	I0412 20:06:38.349704  262043 cache.go:80] save to tar file k8s.gcr.io/kube-apiserver:v1.23.6-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.23.6-rc.0 succeeded
	I0412 20:06:38.349731  262043 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6 exists
	I0412 20:06:38.349747  262043 cache.go:96] cache image "k8s.gcr.io/coredns/coredns:v1.8.6" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6" took 400.509µs
	I0412 20:06:38.349752  262043 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.23.6-rc.0 exists
	I0412 20:06:38.349776  262043 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.1-0 exists
	I0412 20:06:38.349782  262043 cache.go:96] cache image "k8s.gcr.io/kube-proxy:v1.23.6-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.23.6-rc.0" took 545.75µs
	I0412 20:06:38.349786  262043 cache.go:96] cache image "k8s.gcr.io/etcd:3.5.1-0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.1-0" took 478.693µs
	I0412 20:06:38.349792  262043 cache.go:80] save to tar file k8s.gcr.io/kube-proxy:v1.23.6-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.23.6-rc.0 succeeded
	I0412 20:06:38.349795  262043 cache.go:80] save to tar file k8s.gcr.io/etcd:3.5.1-0 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.1-0 succeeded
	I0412 20:06:38.349762  262043 cache.go:80] save to tar file k8s.gcr.io/coredns/coredns:v1.8.6 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6 succeeded
	I0412 20:06:38.349818  262043 cache.go:87] Successfully saved all images to host disk.
	I0412 20:06:38.397946  262043 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon, skipping pull
	I0412 20:06:38.397982  262043 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 exists in daemon, skipping load
	I0412 20:06:38.397999  262043 cache.go:206] Successfully downloaded all kic artifacts
	I0412 20:06:38.398046  262043 start.go:352] acquiring machines lock for no-preload-20220412200453-42006: {Name:mk5e55d06e0b09ff05f6bc84f5bd170846683246 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0412 20:06:38.398148  262043 start.go:356] acquired machines lock for "no-preload-20220412200453-42006" in 81.316µs
	I0412 20:06:38.398172  262043 start.go:94] Skipping create...Using existing machine configuration
	I0412 20:06:38.398178  262043 fix.go:55] fixHost starting: 
	I0412 20:06:38.398439  262043 cli_runner.go:164] Run: docker container inspect no-preload-20220412200453-42006 --format={{.State.Status}}
	I0412 20:06:38.434741  262043 fix.go:103] recreateIfNeeded on no-preload-20220412200453-42006: state=Stopped err=<nil>
	W0412 20:06:38.434785  262043 fix.go:129] unexpected machine state, will restart: <nil>
	I0412 20:06:35.616400  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:06:37.616983  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:06:40.116038  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:06:37.877798  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:06:40.377081  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:06:38.602879  242388 pod_ready.go:102] pod "coredns-64897985d-n8275" in "kube-system" namespace has status "Ready":"False"
	I0412 20:06:41.103081  242388 pod_ready.go:102] pod "coredns-64897985d-n8275" in "kube-system" namespace has status "Ready":"False"
	I0412 20:06:38.438292  262043 out.go:176] * Restarting existing docker container for "no-preload-20220412200453-42006" ...
	I0412 20:06:38.438387  262043 cli_runner.go:164] Run: docker start no-preload-20220412200453-42006
	I0412 20:06:38.850014  262043 cli_runner.go:164] Run: docker container inspect no-preload-20220412200453-42006 --format={{.State.Status}}
	I0412 20:06:38.886204  262043 kic.go:416] container "no-preload-20220412200453-42006" state is running.
	I0412 20:06:38.886611  262043 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20220412200453-42006
	I0412 20:06:38.923268  262043 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220412200453-42006/config.json ...
	I0412 20:06:38.923827  262043 machine.go:88] provisioning docker machine ...
	I0412 20:06:38.923885  262043 ubuntu.go:169] provisioning hostname "no-preload-20220412200453-42006"
	I0412 20:06:38.923971  262043 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220412200453-42006
	I0412 20:06:38.962118  262043 main.go:134] libmachine: Using SSH client type: native
	I0412 20:06:38.962338  262043 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7d71c0] 0x7da220 <nil>  [] 0s} 127.0.0.1 49407 <nil> <nil>}
	I0412 20:06:38.962366  262043 main.go:134] libmachine: About to run SSH command:
	sudo hostname no-preload-20220412200453-42006 && echo "no-preload-20220412200453-42006" | sudo tee /etc/hostname
	I0412 20:06:38.963022  262043 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58218->127.0.0.1:49407: read: connection reset by peer
	I0412 20:06:42.098680  262043 main.go:134] libmachine: SSH cmd err, output: <nil>: no-preload-20220412200453-42006
	
	I0412 20:06:42.098774  262043 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220412200453-42006
	I0412 20:06:42.136233  262043 main.go:134] libmachine: Using SSH client type: native
	I0412 20:06:42.136411  262043 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7d71c0] 0x7da220 <nil>  [] 0s} 127.0.0.1 49407 <nil> <nil>}
	I0412 20:06:42.136446  262043 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-20220412200453-42006' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-20220412200453-42006/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-20220412200453-42006' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0412 20:06:42.256294  262043 main.go:134] libmachine: SSH cmd err, output: <nil>: 
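
The SSH fragment above keeps /etc/hosts in step with the machine name: if no entry for the node exists, an existing 127.0.1.1 line is rewritten in place, otherwise one is appended. A parameterized equivalent (hypothetical NAME variable; same grep/sed/tee pattern as the log):

	NAME=no-preload-20220412200453-42006
	if ! grep -q "\s${NAME}$" /etc/hosts; then
	  if grep -q '^127.0.1.1\s' /etc/hosts; then
	    sudo sed -i "s/^127.0.1.1\s.*/127.0.1.1 ${NAME}/" /etc/hosts
	  else
	    echo "127.0.1.1 ${NAME}" | sudo tee -a /etc/hosts
	  fi
	fi
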
	I0412 20:06:42.256326  262043 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube}
	I0412 20:06:42.256353  262043 ubuntu.go:177] setting up certificates
	I0412 20:06:42.256366  262043 provision.go:83] configureAuth start
	I0412 20:06:42.256422  262043 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20220412200453-42006
	I0412 20:06:42.292692  262043 provision.go:138] copyHostCerts
	I0412 20:06:42.292765  262043 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem, removing ...
	I0412 20:06:42.292779  262043 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem
	I0412 20:06:42.292851  262043 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem (1082 bytes)
	I0412 20:06:42.292945  262043 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem, removing ...
	I0412 20:06:42.292956  262043 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem
	I0412 20:06:42.292982  262043 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem (1123 bytes)
	I0412 20:06:42.293044  262043 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem, removing ...
	I0412 20:06:42.293052  262043 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem
	I0412 20:06:42.293073  262043 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem (1675 bytes)
	I0412 20:06:42.293136  262043 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem org=jenkins.no-preload-20220412200453-42006 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube no-preload-20220412200453-42006]
	I0412 20:06:42.358423  262043 provision.go:172] copyRemoteCerts
	I0412 20:06:42.358486  262043 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0412 20:06:42.358525  262043 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220412200453-42006
	I0412 20:06:42.395317  262043 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49407 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/no-preload-20220412200453-42006/id_rsa Username:docker}
	I0412 20:06:42.484628  262043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0412 20:06:42.504389  262043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0412 20:06:42.522915  262043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0412 20:06:42.542117  262043 provision.go:86] duration metric: configureAuth took 285.73544ms
	I0412 20:06:42.542154  262043 ubuntu.go:193] setting minikube options for container-runtime
	I0412 20:06:42.542377  262043 config.go:178] Loaded profile config "no-preload-20220412200453-42006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6-rc.0
	I0412 20:06:42.542394  262043 machine.go:91] provisioned docker machine in 3.618527106s
	I0412 20:06:42.542402  262043 start.go:306] post-start starting for "no-preload-20220412200453-42006" (driver="docker")
	I0412 20:06:42.542415  262043 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0412 20:06:42.542453  262043 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0412 20:06:42.542495  262043 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220412200453-42006
	I0412 20:06:42.578654  262043 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49407 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/no-preload-20220412200453-42006/id_rsa Username:docker}
	I0412 20:06:42.667884  262043 ssh_runner.go:195] Run: cat /etc/os-release
	I0412 20:06:42.670582  262043 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0412 20:06:42.670604  262043 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0412 20:06:42.670613  262043 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0412 20:06:42.670620  262043 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0412 20:06:42.670631  262043 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/addons for local assets ...
	I0412 20:06:42.670678  262043 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files for local assets ...
	I0412 20:06:42.670745  262043 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/420062.pem -> 420062.pem in /etc/ssl/certs
	I0412 20:06:42.670826  262043 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0412 20:06:42.679250  262043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/420062.pem --> /etc/ssl/certs/420062.pem (1708 bytes)
	I0412 20:06:42.698560  262043 start.go:309] post-start completed in 156.135756ms
	I0412 20:06:42.698634  262043 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0412 20:06:42.698705  262043 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220412200453-42006
	I0412 20:06:42.735208  262043 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49407 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/no-preload-20220412200453-42006/id_rsa Username:docker}
	I0412 20:06:42.820837  262043 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0412 20:06:42.824992  262043 fix.go:57] fixHost completed within 4.426806364s
	I0412 20:06:42.825026  262043 start.go:81] releasing machines lock for "no-preload-20220412200453-42006", held for 4.42686368s
	I0412 20:06:42.825125  262043 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20220412200453-42006
	I0412 20:06:42.860349  262043 ssh_runner.go:195] Run: systemctl --version
	I0412 20:06:42.860405  262043 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220412200453-42006
	I0412 20:06:42.860425  262043 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0412 20:06:42.860497  262043 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220412200453-42006
	I0412 20:06:42.899252  262043 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49407 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/no-preload-20220412200453-42006/id_rsa Username:docker}
	I0412 20:06:42.899693  262043 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49407 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/no-preload-20220412200453-42006/id_rsa Username:docker}
	I0412 20:06:43.008117  262043 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0412 20:06:43.021129  262043 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0412 20:06:43.031539  262043 docker.go:183] disabling docker service ...
	I0412 20:06:43.031610  262043 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0412 20:06:43.042865  262043 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0412 20:06:43.052974  262043 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0412 20:06:42.116983  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:06:44.616790  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:06:42.877160  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:06:45.376791  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:06:43.601289  242388 pod_ready.go:102] pod "coredns-64897985d-n8275" in "kube-system" namespace has status "Ready":"False"
	I0412 20:06:45.601448  242388 pod_ready.go:102] pod "coredns-64897985d-n8275" in "kube-system" namespace has status "Ready":"False"
	I0412 20:06:43.136830  262043 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0412 20:06:43.212117  262043 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
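
The systemctl calls above (stop the socket, stop the service, disable the socket, mask the unit) make sure dockerd cannot be restarted while containerd is the runtime. A condensed equivalent of that sequence (a sketch using the same units as the log):

	sudo systemctl stop -f docker.socket docker.service
	sudo systemctl disable docker.socket
	sudo systemctl mask docker.service

After this, systemctl is-active docker should report inactive.
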
	I0412 20:06:43.222157  262043 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0412 20:06:43.235875  262043 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLm1vbml0b3IudjEuY2dyb3VwcyJdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNiIKICAgIHN0YXRzX2NvbGxlY3RfcGVyaW9kID0gMTAKICAgIGVuYWJsZV90bHNfc3RyZWFtaW5nID0gZmFsc2UKICAgIG1heF9jb250YWluZXJfbG9nX2xpbmVfc2l6ZSA9IDE2Mzg0CiAgICByZXN0cmljdF9vb21fc2NvcmVfYWRqID0gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZF0KICAgICAgZGlzY2FyZF91bnBhY2tlZF9sYXllcnMgPSB0cnVlCiAgICAgIHNuYXBzaG90dGVyID0gIm92ZXJsYXlmcyIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQuZGVmYXVsdF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnVudHJ1c3RlZF93b3JrbG9hZF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICIiCiAgICAgICAgcnVudGltZV9lbmdpbmUgPSAiIgogICAgICAgIHJ1bnRpbWVfcm9vdCA9ICIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzLnJ1bmNdCiAgICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICBTeXN0ZW1kQ2dyb3VwID0gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY25pXQogICAgICBiaW5fZGlyID0gIi9vcHQvY25pL2JpbiIKICAgICAgY29uZl9kaXIgPSAiL2V0Yy9jbmkvbmV0Lm1rIgogICAgICBjb25mX3RlbXBsYXRlID0gIiIKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5yZWdpc3RyeV0KICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5Lm1pcnJvcnNdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5Lm1pcnJvcnMuImRvY2tlci5pbyJdCiAgICAgICAgICBlbmRwb2ludCA9IFsiaHR0cHM6Ly9yZWdpc3RyeS0xLmRvY2tlci5pbyJdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuc2VydmljZS52MS5kaWZmLXNlcnZpY2UiXQogICAgZGVmYXVsdCA9IFsid2Fsa2luZyJdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ2MudjEuc2NoZWR1bGVyIl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
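
The long base64 argument above is the node's entire containerd configuration; decoding it (or reading the file back from the node) yields a plain config.toml. One way to inspect it after the write (a sketch; the line count is arbitrary, and the first lines shown were confirmed by decoding the payload):

	sudo head -n 4 /etc/containerd/config.toml
	# version = 2
	# root = "/var/lib/containerd"
	# state = "/run/containerd"
	# oom_score = 0
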
	I0412 20:06:43.250199  262043 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0412 20:06:43.257113  262043 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0412 20:06:43.263893  262043 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0412 20:06:43.341312  262043 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0412 20:06:43.418167  262043 start.go:441] Will wait 60s for socket path /run/containerd/containerd.sock
	I0412 20:06:43.418236  262043 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0412 20:06:43.422203  262043 start.go:462] Will wait 60s for crictl version
	I0412 20:06:43.422257  262043 ssh_runner.go:195] Run: sudo crictl version
	I0412 20:06:43.450330  262043 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-04-12T20:06:43Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0412 20:06:47.116435  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:06:49.117177  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:06:47.377539  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:06:49.876781  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:06:51.877708  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:06:48.101289  242388 pod_ready.go:102] pod "coredns-64897985d-n8275" in "kube-system" namespace has status "Ready":"False"
	I0412 20:06:50.101523  242388 pod_ready.go:102] pod "coredns-64897985d-n8275" in "kube-system" namespace has status "Ready":"False"
	I0412 20:06:52.101737  242388 pod_ready.go:102] pod "coredns-64897985d-n8275" in "kube-system" namespace has status "Ready":"False"
	I0412 20:06:54.498500  262043 ssh_runner.go:195] Run: sudo crictl version
	I0412 20:06:54.523682  262043 start.go:471] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.5.10
	RuntimeApiVersion:  v1alpha2
	I0412 20:06:54.523744  262043 ssh_runner.go:195] Run: containerd --version
	I0412 20:06:54.544706  262043 ssh_runner.go:195] Run: containerd --version
	I0412 20:06:54.569217  262043 out.go:176] * Preparing Kubernetes v1.23.6-rc.0 on containerd 1.5.10 ...
	I0412 20:06:54.569293  262043 cli_runner.go:164] Run: docker network inspect no-preload-20220412200453-42006 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0412 20:06:54.608131  262043 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0412 20:06:54.611871  262043 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
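The one-liner above is how minikube edits /etc/hosts without clobbering unrelated entries: grep -v drops any stale host.minikube.internal line, the fresh mapping is appended, the result lands in a temp file, and sudo cp installs it. Using cp rather than mv is deliberate: /etc/hosts inside the container is a bind mount, so it must be overwritten in place rather than replaced by a new inode.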
	I0412 20:06:51.118411  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:06:53.616135  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:06:54.376925  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:06:56.377007  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:06:54.101816  242388 pod_ready.go:102] pod "coredns-64897985d-n8275" in "kube-system" namespace has status "Ready":"False"
	I0412 20:06:56.601365  242388 pod_ready.go:102] pod "coredns-64897985d-n8275" in "kube-system" namespace has status "Ready":"False"
	I0412 20:06:54.624326  262043 out.go:176]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0412 20:06:54.624409  262043 preload.go:132] Checking if preload exists for k8s version v1.23.6-rc.0 and runtime containerd
	I0412 20:06:54.624470  262043 ssh_runner.go:195] Run: sudo crictl images --output json
	I0412 20:06:54.650522  262043 containerd.go:607] all images are preloaded for containerd runtime.
	I0412 20:06:54.650556  262043 cache_images.go:84] Images are preloaded, skipping loading
	I0412 20:06:54.650602  262043 ssh_runner.go:195] Run: sudo crictl info
	I0412 20:06:54.676728  262043 cni.go:93] Creating CNI manager for ""
	I0412 20:06:54.676761  262043 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0412 20:06:54.676777  262043 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0412 20:06:54.676797  262043 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.23.6-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-20220412200453-42006 NodeName:no-preload-20220412200453-42006 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs Cl
ientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0412 20:06:54.676953  262043 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "no-preload-20220412200453-42006"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
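The config block above is four YAML documents in one file: InitConfiguration (node-local bootstrap settings), ClusterConfiguration (control-plane layout, cert SANs, etcd), KubeletConfiguration, and KubeProxyConfiguration. minikube renders it to /var/tmp/minikube/kubeadm.yaml.new and, during restart, diffs it against the previous kubeadm.yaml to decide whether the cluster needs reconfiguring. To list the documents on the node (illustrative):

    sudo grep '^kind:' /var/tmp/minikube/kubeadm.yaml.new
    # kind: InitConfiguration
    # kind: ClusterConfiguration
    # kind: KubeletConfiguration
    # kind: KubeProxyConfiguration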
	
	I0412 20:06:54.677056  262043 kubeadm.go:936] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=no-preload-20220412200453-42006 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6-rc.0 ClusterName:no-preload-20220412200453-42006 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
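One detail worth noting in the drop-in above: the bare ExecStart= line is not a mistake. systemd drop-ins append to list-valued settings, so a kubelet override must first clear the ExecStart inherited from /lib/systemd/system/kubelet.service before setting its own, e.g. (generic illustration):

    [Service]
    ExecStart=
    ExecStart=/new/kubelet/command --with-overridden-flags

Without the empty assignment, systemd would see two ExecStart entries and refuse to start the (non-oneshot) service.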
	I0412 20:06:54.677120  262043 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6-rc.0
	I0412 20:06:54.685668  262043 binaries.go:44] Found k8s binaries, skipping transfer
	I0412 20:06:54.685760  262043 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0412 20:06:54.693734  262043 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (581 bytes)
	I0412 20:06:54.708454  262043 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0412 20:06:54.722148  262043 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2065 bytes)
	I0412 20:06:54.735859  262043 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0412 20:06:54.738901  262043 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0412 20:06:54.748842  262043 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220412200453-42006 for IP: 192.168.49.2
	I0412 20:06:54.748963  262043 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.key
	I0412 20:06:54.749000  262043 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.key
	I0412 20:06:54.749075  262043 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220412200453-42006/client.key
	I0412 20:06:54.749132  262043 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220412200453-42006/apiserver.key.dd3b5fb2
	I0412 20:06:54.749166  262043 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220412200453-42006/proxy-client.key
	I0412 20:06:54.749256  262043 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/42006.pem (1338 bytes)
	W0412 20:06:54.749286  262043 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/42006_empty.pem, impossibly tiny 0 bytes
	I0412 20:06:54.749298  262043 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem (1679 bytes)
	I0412 20:06:54.749321  262043 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem (1082 bytes)
	I0412 20:06:54.749354  262043 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem (1123 bytes)
	I0412 20:06:54.749382  262043 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem (1675 bytes)
	I0412 20:06:54.749425  262043 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/420062.pem (1708 bytes)
	I0412 20:06:54.750018  262043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220412200453-42006/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0412 20:06:54.769182  262043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220412200453-42006/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0412 20:06:54.789702  262043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220412200453-42006/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0412 20:06:54.809714  262043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220412200453-42006/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0412 20:06:54.828243  262043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0412 20:06:54.846446  262043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0412 20:06:54.865190  262043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0412 20:06:54.885291  262043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0412 20:06:54.905894  262043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/420062.pem --> /usr/share/ca-certificates/420062.pem (1708 bytes)
	I0412 20:06:54.926143  262043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0412 20:06:54.945078  262043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/42006.pem --> /usr/share/ca-certificates/42006.pem (1338 bytes)
	I0412 20:06:54.963695  262043 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0412 20:06:54.977881  262043 ssh_runner.go:195] Run: openssl version
	I0412 20:06:54.983989  262043 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/420062.pem && ln -fs /usr/share/ca-certificates/420062.pem /etc/ssl/certs/420062.pem"
	I0412 20:06:54.993645  262043 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/420062.pem
	I0412 20:06:54.997307  262043 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Apr 12 19:26 /usr/share/ca-certificates/420062.pem
	I0412 20:06:54.997360  262043 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/420062.pem
	I0412 20:06:55.002865  262043 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/420062.pem /etc/ssl/certs/3ec20f2e.0"
	I0412 20:06:55.011027  262043 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0412 20:06:55.019101  262043 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0412 20:06:55.022548  262043 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Apr 12 19:21 /usr/share/ca-certificates/minikubeCA.pem
	I0412 20:06:55.022605  262043 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0412 20:06:55.027910  262043 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0412 20:06:55.035322  262043 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/42006.pem && ln -fs /usr/share/ca-certificates/42006.pem /etc/ssl/certs/42006.pem"
	I0412 20:06:55.043746  262043 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42006.pem
	I0412 20:06:55.047043  262043 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Apr 12 19:26 /usr/share/ca-certificates/42006.pem
	I0412 20:06:55.047115  262043 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42006.pem
	I0412 20:06:55.052304  262043 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/42006.pem /etc/ssl/certs/51391683.0"
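The symlinks created above (3ec20f2e.0, b5213941.0, 51391683.0) follow the OpenSSL c_rehash convention: each CA certificate is linked into /etc/ssl/certs under <subject-hash>.0, which is how OpenSSL-based clients look certificates up at verification time. The hash is exactly what the logged openssl invocation prints, e.g.:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # b5213941  -> hence the symlink /etc/ssl/certs/b5213941.0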
	I0412 20:06:55.059836  262043 kubeadm.go:391] StartCluster: {Name:no-preload-20220412200453-42006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6-rc.0 ClusterName:no-preload-20220412200453-42006 Namespace:default APIServerName:minikubeC
A APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.6-rc.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] Sta
rtHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0412 20:06:55.059954  262043 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0412 20:06:55.059998  262043 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0412 20:06:55.087591  262043 cri.go:87] found id: "902900f058f19c75879df7920ae1fe5c187eedf72398c8b16d122f6f045bc93b"
	I0412 20:06:55.087619  262043 cri.go:87] found id: "663712be1e7cf421d3ad279c7e52a1827ee612dc04e50e046acd97b607610a9e"
	I0412 20:06:55.087626  262043 cri.go:87] found id: "6741f3eed1f950fadac5b3bfa91947af6095899aca206fd94c670ab6f0a7847a"
	I0412 20:06:55.087632  262043 cri.go:87] found id: "359eccc90aee595a6b67b52c56dfc92af2ca025088e4905056ca81c55c963d6f"
	I0412 20:06:55.087638  262043 cri.go:87] found id: "be25a1a0bb72db53d6c18b365f5ad018d89bb4cf7d5f9a2baf8d4240564b4454"
	I0412 20:06:55.087644  262043 cri.go:87] found id: "9920f60dd74ddee8a369cd42569d4af3e1c3d0fc4879e75d4b7f55ca9cbfc159"
	I0412 20:06:55.087650  262043 cri.go:87] found id: "7e06f4978c87749d49342b41b040b454aa3ec9fa86970708570f721a2a623b50"
	I0412 20:06:55.087655  262043 cri.go:87] found id: "28053ce3f430b4c659c9f2bfffb00de41631d4e3ecbcfa9e4a1dcafbe76fd144"
	I0412 20:06:55.087661  262043 cri.go:87] found id: ""
	I0412 20:06:55.087701  262043 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0412 20:06:55.104645  262043 cri.go:114] JSON = null
	W0412 20:06:55.104706  262043 kubeadm.go:398] unpause failed: list paused: list returned 0 containers, but ps returned 8
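The warning above is minikube reconciling two views of the runtime: crictl ps (over the CRI API) found 8 kube-system containers, while runc list under containerd's k8s.io root returned null, i.e. nothing in a paused state to unpause. The raw query can be repeated by hand (illustrative):

    sudo runc --root /run/containerd/runc/k8s.io list -f json

Since nothing is actually paused, the mismatch is logged as a warning and startup continues with the restart path below.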
	I0412 20:06:55.104768  262043 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0412 20:06:55.113235  262043 kubeadm.go:402] found existing configuration files, will attempt cluster restart
	I0412 20:06:55.113267  262043 kubeadm.go:601] restartCluster start
	I0412 20:06:55.113330  262043 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0412 20:06:55.121394  262043 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:06:55.122364  262043 kubeconfig.go:116] verify returned: extract IP: "no-preload-20220412200453-42006" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0412 20:06:55.122978  262043 kubeconfig.go:127] "no-preload-20220412200453-42006" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig - will repair!
	I0412 20:06:55.123815  262043 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig: {Name:mk47182dadcc139652898f38b199a7292a3b4031 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 20:06:55.125557  262043 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0412 20:06:55.132989  262043 api_server.go:165] Checking apiserver status ...
	I0412 20:06:55.133058  262043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:06:55.141992  262043 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:06:55.342388  262043 api_server.go:165] Checking apiserver status ...
	I0412 20:06:55.342462  262043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:06:55.351633  262043 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:06:55.542896  262043 api_server.go:165] Checking apiserver status ...
	I0412 20:06:55.542982  262043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:06:55.552707  262043 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:06:55.742924  262043 api_server.go:165] Checking apiserver status ...
	I0412 20:06:55.743049  262043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:06:55.752665  262043 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:06:55.942906  262043 api_server.go:165] Checking apiserver status ...
	I0412 20:06:55.943016  262043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:06:55.952455  262043 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:06:56.142683  262043 api_server.go:165] Checking apiserver status ...
	I0412 20:06:56.142778  262043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:06:56.152261  262043 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:06:56.342577  262043 api_server.go:165] Checking apiserver status ...
	I0412 20:06:56.342673  262043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:06:56.352902  262043 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:06:56.542076  262043 api_server.go:165] Checking apiserver status ...
	I0412 20:06:56.542180  262043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:06:56.551444  262043 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:06:56.742688  262043 api_server.go:165] Checking apiserver status ...
	I0412 20:06:56.742769  262043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:06:56.752796  262043 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:06:56.942936  262043 api_server.go:165] Checking apiserver status ...
	I0412 20:06:56.943045  262043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:06:56.952305  262043 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:06:57.142573  262043 api_server.go:165] Checking apiserver status ...
	I0412 20:06:57.142664  262043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:06:57.151862  262043 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:06:57.342121  262043 api_server.go:165] Checking apiserver status ...
	I0412 20:06:57.342210  262043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:06:57.351557  262043 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:06:57.542857  262043 api_server.go:165] Checking apiserver status ...
	I0412 20:06:57.542941  262043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:06:57.552265  262043 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:06:57.742615  262043 api_server.go:165] Checking apiserver status ...
	I0412 20:06:57.742695  262043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:06:57.752050  262043 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:06:57.942284  262043 api_server.go:165] Checking apiserver status ...
	I0412 20:06:57.942397  262043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:06:57.951581  262043 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:06:55.616256  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:06:57.616496  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:06:59.616535  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:06:58.377120  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:07:00.876752  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:06:59.101887  242388 pod_ready.go:102] pod "coredns-64897985d-n8275" in "kube-system" namespace has status "Ready":"False"
	I0412 20:07:01.601425  242388 pod_ready.go:102] pod "coredns-64897985d-n8275" in "kube-system" namespace has status "Ready":"False"
	I0412 20:06:58.142257  262043 api_server.go:165] Checking apiserver status ...
	I0412 20:06:58.142347  262043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:06:58.151550  262043 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:06:58.151581  262043 api_server.go:165] Checking apiserver status ...
	I0412 20:06:58.151623  262043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:06:58.159939  262043 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:06:58.159969  262043 kubeadm.go:576] needs reconfigure: apiserver error: timed out waiting for the condition
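Each "Checking apiserver status" probe above is the single pgrep shown in the log. The flags matter: -f matches the pattern against the full command line, -x requires that full line to match the pattern exactly, and -n returns only the newest matching PID, so the probe only fires for a kube-apiserver launched with minikube's flags:

    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    # exit 0 plus a PID when the apiserver is running; exit 1 otherwise

After repeated misses the wait times out and minikube falls through to a reconfigure, stopping the kube-system containers below and replaying the kubeadm init phases.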
	I0412 20:06:58.159976  262043 kubeadm.go:1067] stopping kube-system containers ...
	I0412 20:06:58.159990  262043 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0412 20:06:58.160053  262043 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0412 20:06:58.188974  262043 cri.go:87] found id: "902900f058f19c75879df7920ae1fe5c187eedf72398c8b16d122f6f045bc93b"
	I0412 20:06:58.189006  262043 cri.go:87] found id: "663712be1e7cf421d3ad279c7e52a1827ee612dc04e50e046acd97b607610a9e"
	I0412 20:06:58.189015  262043 cri.go:87] found id: "6741f3eed1f950fadac5b3bfa91947af6095899aca206fd94c670ab6f0a7847a"
	I0412 20:06:58.189027  262043 cri.go:87] found id: "359eccc90aee595a6b67b52c56dfc92af2ca025088e4905056ca81c55c963d6f"
	I0412 20:06:58.189036  262043 cri.go:87] found id: "be25a1a0bb72db53d6c18b365f5ad018d89bb4cf7d5f9a2baf8d4240564b4454"
	I0412 20:06:58.189046  262043 cri.go:87] found id: "9920f60dd74ddee8a369cd42569d4af3e1c3d0fc4879e75d4b7f55ca9cbfc159"
	I0412 20:06:58.189062  262043 cri.go:87] found id: "7e06f4978c87749d49342b41b040b454aa3ec9fa86970708570f721a2a623b50"
	I0412 20:06:58.189077  262043 cri.go:87] found id: "28053ce3f430b4c659c9f2bfffb00de41631d4e3ecbcfa9e4a1dcafbe76fd144"
	I0412 20:06:58.189091  262043 cri.go:87] found id: ""
	I0412 20:06:58.189105  262043 cri.go:232] Stopping containers: [902900f058f19c75879df7920ae1fe5c187eedf72398c8b16d122f6f045bc93b 663712be1e7cf421d3ad279c7e52a1827ee612dc04e50e046acd97b607610a9e 6741f3eed1f950fadac5b3bfa91947af6095899aca206fd94c670ab6f0a7847a 359eccc90aee595a6b67b52c56dfc92af2ca025088e4905056ca81c55c963d6f be25a1a0bb72db53d6c18b365f5ad018d89bb4cf7d5f9a2baf8d4240564b4454 9920f60dd74ddee8a369cd42569d4af3e1c3d0fc4879e75d4b7f55ca9cbfc159 7e06f4978c87749d49342b41b040b454aa3ec9fa86970708570f721a2a623b50 28053ce3f430b4c659c9f2bfffb00de41631d4e3ecbcfa9e4a1dcafbe76fd144]
	I0412 20:06:58.189170  262043 ssh_runner.go:195] Run: which crictl
	I0412 20:06:58.192496  262043 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop 902900f058f19c75879df7920ae1fe5c187eedf72398c8b16d122f6f045bc93b 663712be1e7cf421d3ad279c7e52a1827ee612dc04e50e046acd97b607610a9e 6741f3eed1f950fadac5b3bfa91947af6095899aca206fd94c670ab6f0a7847a 359eccc90aee595a6b67b52c56dfc92af2ca025088e4905056ca81c55c963d6f be25a1a0bb72db53d6c18b365f5ad018d89bb4cf7d5f9a2baf8d4240564b4454 9920f60dd74ddee8a369cd42569d4af3e1c3d0fc4879e75d4b7f55ca9cbfc159 7e06f4978c87749d49342b41b040b454aa3ec9fa86970708570f721a2a623b50 28053ce3f430b4c659c9f2bfffb00de41631d4e3ecbcfa9e4a1dcafbe76fd144
	I0412 20:06:58.221614  262043 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0412 20:06:58.233286  262043 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0412 20:06:58.241242  262043 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Apr 12 20:05 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Apr 12 20:05 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2063 Apr 12 20:05 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Apr 12 20:05 /etc/kubernetes/scheduler.conf
	
	I0412 20:06:58.241317  262043 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0412 20:06:58.248808  262043 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0412 20:06:58.256355  262043 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0412 20:06:58.263501  262043 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:06:58.263579  262043 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0412 20:06:58.270559  262043 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0412 20:06:58.277975  262043 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:06:58.278046  262043 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0412 20:06:58.285321  262043 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0412 20:06:58.294292  262043 kubeadm.go:678] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0412 20:06:58.294326  262043 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6-rc.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0412 20:06:58.338348  262043 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6-rc.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0412 20:06:58.973712  262043 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6-rc.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0412 20:06:59.119303  262043 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6-rc.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0412 20:06:59.167022  262043 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6-rc.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
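Rather than rerunning a full kubeadm init, the restart path above replays only the phases it needs, in order: certs, kubeconfig, kubelet-start, control-plane, and etcd, all against the freshly copied /var/tmp/minikube/kubeadm.yaml. The control-plane and etcd phases re-render the static pod manifests under /etc/kubernetes/manifests, and the restarted kubelet then brings the apiserver back, which is what the pgrep wait below is watching for.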
	I0412 20:06:59.220626  262043 api_server.go:51] waiting for apiserver process to appear ...
	I0412 20:06:59.220700  262043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:06:59.730082  262043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:07:00.229868  262043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:07:00.729838  262043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:07:01.230186  262043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:07:01.729607  262043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:07:02.230239  262043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:07:02.730366  262043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:07:01.616795  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:07:04.116517  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:07:02.877545  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:07:05.377276  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:07:04.102441  242388 pod_ready.go:102] pod "coredns-64897985d-n8275" in "kube-system" namespace has status "Ready":"False"
	I0412 20:07:06.102522  242388 pod_ready.go:102] pod "coredns-64897985d-n8275" in "kube-system" namespace has status "Ready":"False"
	I0412 20:07:03.230282  262043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:07:03.730277  262043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:07:04.229692  262043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:07:04.730521  262043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:07:05.230320  262043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:07:05.295798  262043 api_server.go:71] duration metric: took 6.075172654s to wait for apiserver process to appear ...
	I0412 20:07:05.295834  262043 api_server.go:87] waiting for apiserver healthz status ...
	I0412 20:07:05.295848  262043 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0412 20:07:05.296366  262043 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": dial tcp 192.168.49.2:8443: connect: connection refused
	I0412 20:07:05.797121  262043 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0412 20:07:08.128253  262043 api_server.go:266] https://192.168.49.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0412 20:07:08.128295  262043 api_server.go:102] status: https://192.168.49.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0412 20:07:08.296479  262043 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0412 20:07:08.303675  262043 api_server.go:266] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0412 20:07:08.303712  262043 api_server.go:102] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
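In these healthz dumps every [+] line is a passing check and every [-] line a failing one; "reason withheld" only means the detail is hidden from unauthorized callers. The failures are all post-start hooks (RBAC bootstrap roles, default priority classes, API service registration) that normally complete within seconds of startup, which is why successive probes below show them flipping to [+] one by one until /healthz finally returns 200.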
	I0412 20:07:08.797254  262043 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0412 20:07:08.802116  262043 api_server.go:266] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0412 20:07:08.802146  262043 api_server.go:102] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0412 20:07:09.296686  262043 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0412 20:07:09.301615  262043 api_server.go:266] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0412 20:07:09.301650  262043 api_server.go:102] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0412 20:07:09.797301  262043 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0412 20:07:09.803564  262043 api_server.go:266] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0412 20:07:09.811289  262043 api_server.go:140] control plane version: v1.23.6-rc.0
	I0412 20:07:09.811325  262043 api_server.go:130] duration metric: took 4.515484491s to wait for apiserver health ...
	I0412 20:07:09.811339  262043 cni.go:93] Creating CNI manager for ""
	I0412 20:07:09.811347  262043 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0412 20:07:06.116969  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:07:08.117491  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:07:07.876894  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:07:09.877235  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:07:08.601586  242388 pod_ready.go:102] pod "coredns-64897985d-n8275" in "kube-system" namespace has status "Ready":"False"
	I0412 20:07:10.602554  242388 pod_ready.go:102] pod "coredns-64897985d-n8275" in "kube-system" namespace has status "Ready":"False"
	I0412 20:07:09.814030  262043 out.go:176] * Configuring CNI (Container Networking Interface) ...
	I0412 20:07:09.814109  262043 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0412 20:07:09.818353  262043 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.6-rc.0/kubectl ...
	I0412 20:07:09.818376  262043 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0412 20:07:09.861903  262043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6-rc.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0412 20:07:11.048423  262043 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.23.6-rc.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.186475348s)
	I0412 20:07:11.048465  262043 system_pods.go:43] waiting for kube-system pods to appear ...
	I0412 20:07:11.055790  262043 system_pods.go:59] 9 kube-system pods found
	I0412 20:07:11.055822  262043 system_pods.go:61] "coredns-64897985d-7fs64" [12c651ff-9508-4a46-9c6f-3bf20b59dfae] Running
	I0412 20:07:11.055830  262043 system_pods.go:61] "etcd-no-preload-20220412200453-42006" [bdfa6f43-91b7-40d0-9c3f-7684ad85c38e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0412 20:07:11.055838  262043 system_pods.go:61] "kindnet-rv4qh" [db399dcc-0c32-427a-b14a-9653948e580d] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0412 20:07:11.055843  262043 system_pods.go:61] "kube-apiserver-no-preload-20220412200453-42006" [b1a9cfa6-973a-43f9-9bb4-c4db4b40367f] Running
	I0412 20:07:11.055850  262043 system_pods.go:61] "kube-controller-manager-no-preload-20220412200453-42006" [2cf35c18-75b1-4645-af1e-dbc8d5e55b73] Running
	I0412 20:07:11.055854  262043 system_pods.go:61] "kube-proxy-tctg4" [caa02c16-d30f-48d0-b131-20d3bab70353] Running
	I0412 20:07:11.055858  262043 system_pods.go:61] "kube-scheduler-no-preload-20220412200453-42006" [c3aec238-45e4-4049-876c-f271b9977d2a] Running
	I0412 20:07:11.055865  262043 system_pods.go:61] "metrics-server-b955d9d8-2chfs" [6327233c-6326-4459-b2e2-7ec9aa727186] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0412 20:07:11.055872  262043 system_pods.go:61] "storage-provisioner" [d44a5e95-5510-4f04-b075-c910ed6f1b80] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0412 20:07:11.055877  262043 system_pods.go:74] duration metric: took 7.40521ms to wait for pod list to return data ...
	I0412 20:07:11.055885  262043 node_conditions.go:102] verifying NodePressure condition ...
	I0412 20:07:11.058382  262043 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0412 20:07:11.058412  262043 node_conditions.go:123] node cpu capacity is 8
	I0412 20:07:11.058424  262043 node_conditions.go:105] duration metric: took 2.527202ms to run NodePressure ...
	I0412 20:07:11.058442  262043 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0412 20:07:11.202271  262043 kubeadm.go:737] waiting for restarted kubelet to initialise ...
	I0412 20:07:11.207435  262043 kubeadm.go:752] kubelet initialised
	I0412 20:07:11.207511  262043 kubeadm.go:753] duration metric: took 5.204568ms waiting for restarted kubelet to initialise ...
	I0412 20:07:11.207530  262043 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0412 20:07:11.231634  262043 pod_ready.go:78] waiting up to 4m0s for pod "coredns-64897985d-7fs64" in "kube-system" namespace to be "Ready" ...
	I0412 20:07:11.237543  262043 pod_ready.go:92] pod "coredns-64897985d-7fs64" in "kube-system" namespace has status "Ready":"True"
	I0412 20:07:11.237570  262043 pod_ready.go:81] duration metric: took 5.894944ms waiting for pod "coredns-64897985d-7fs64" in "kube-system" namespace to be "Ready" ...
	I0412 20:07:11.237582  262043 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-20220412200453-42006" in "kube-system" namespace to be "Ready" ...
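pod_ready.go implements the same wait that kubectl exposes: poll each named pod until its Ready condition reports True, capped here at 4 minutes per pod. An equivalent check by hand (illustrative, using the coredns pod from this run):

    kubectl -n kube-system wait pod/coredns-64897985d-7fs64 --for=condition=Ready --timeout=4m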
	I0412 20:07:10.617366  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:07:13.116482  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:07:12.377352  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:07:14.876965  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:07:13.100979  242388 pod_ready.go:102] pod "coredns-64897985d-n8275" in "kube-system" namespace has status "Ready":"False"
	I0412 20:07:15.101733  242388 pod_ready.go:102] pod "coredns-64897985d-n8275" in "kube-system" namespace has status "Ready":"False"
	I0412 20:07:17.101798  242388 pod_ready.go:102] pod "coredns-64897985d-n8275" in "kube-system" namespace has status "Ready":"False"
	I0412 20:07:13.288105  262043 pod_ready.go:102] pod "etcd-no-preload-20220412200453-42006" in "kube-system" namespace has status "Ready":"False"
	I0412 20:07:15.288923  262043 pod_ready.go:102] pod "etcd-no-preload-20220412200453-42006" in "kube-system" namespace has status "Ready":"False"
	I0412 20:07:17.790592  262043 pod_ready.go:102] pod "etcd-no-preload-20220412200453-42006" in "kube-system" namespace has status "Ready":"False"
	I0412 20:07:15.616860  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:07:17.616909  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:07:20.116362  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:07:17.377125  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:07:19.377352  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:07:21.877681  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:07:19.102034  242388 pod_ready.go:102] pod "coredns-64897985d-n8275" in "kube-system" namespace has status "Ready":"False"
	I0412 20:07:21.601067  242388 pod_ready.go:102] pod "coredns-64897985d-n8275" in "kube-system" namespace has status "Ready":"False"
	I0412 20:07:18.788512  262043 pod_ready.go:92] pod "etcd-no-preload-20220412200453-42006" in "kube-system" namespace has status "Ready":"True"
	I0412 20:07:18.788541  262043 pod_ready.go:81] duration metric: took 7.550951484s waiting for pod "etcd-no-preload-20220412200453-42006" in "kube-system" namespace to be "Ready" ...
	I0412 20:07:18.788554  262043 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-20220412200453-42006" in "kube-system" namespace to be "Ready" ...
	I0412 20:07:19.300834  262043 pod_ready.go:92] pod "kube-apiserver-no-preload-20220412200453-42006" in "kube-system" namespace has status "Ready":"True"
	I0412 20:07:19.300866  262043 pod_ready.go:81] duration metric: took 512.302546ms waiting for pod "kube-apiserver-no-preload-20220412200453-42006" in "kube-system" namespace to be "Ready" ...
	I0412 20:07:19.300892  262043 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-20220412200453-42006" in "kube-system" namespace to be "Ready" ...
	I0412 20:07:20.813288  262043 pod_ready.go:92] pod "kube-controller-manager-no-preload-20220412200453-42006" in "kube-system" namespace has status "Ready":"True"
	I0412 20:07:20.813330  262043 pod_ready.go:81] duration metric: took 1.512427511s waiting for pod "kube-controller-manager-no-preload-20220412200453-42006" in "kube-system" namespace to be "Ready" ...
	I0412 20:07:20.813345  262043 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-tctg4" in "kube-system" namespace to be "Ready" ...
	I0412 20:07:20.818389  262043 pod_ready.go:92] pod "kube-proxy-tctg4" in "kube-system" namespace has status "Ready":"True"
	I0412 20:07:20.818418  262043 pod_ready.go:81] duration metric: took 5.063428ms waiting for pod "kube-proxy-tctg4" in "kube-system" namespace to be "Ready" ...
	I0412 20:07:20.818430  262043 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-20220412200453-42006" in "kube-system" namespace to be "Ready" ...
	I0412 20:07:22.828651  262043 pod_ready.go:102] pod "kube-scheduler-no-preload-20220412200453-42006" in "kube-system" namespace has status "Ready":"False"
	I0412 20:07:22.116777  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:07:24.117192  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:07:24.376731  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:07:26.377297  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:07:23.601712  242388 pod_ready.go:102] pod "coredns-64897985d-n8275" in "kube-system" namespace has status "Ready":"False"
	I0412 20:07:26.101443  242388 pod_ready.go:102] pod "coredns-64897985d-n8275" in "kube-system" namespace has status "Ready":"False"
	I0412 20:07:24.329319  262043 pod_ready.go:92] pod "kube-scheduler-no-preload-20220412200453-42006" in "kube-system" namespace has status "Ready":"True"
	I0412 20:07:24.329352  262043 pod_ready.go:81] duration metric: took 3.510912342s waiting for pod "kube-scheduler-no-preload-20220412200453-42006" in "kube-system" namespace to be "Ready" ...
	I0412 20:07:24.329367  262043 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace to be "Ready" ...
	I0412 20:07:26.343964  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:07:26.616625  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:07:29.116436  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:07:28.377600  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:07:30.876826  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:07:28.102245  242388 pod_ready.go:102] pod "coredns-64897985d-n8275" in "kube-system" namespace has status "Ready":"False"
	I0412 20:07:30.601637  242388 pod_ready.go:102] pod "coredns-64897985d-n8275" in "kube-system" namespace has status "Ready":"False"
	I0412 20:07:28.843575  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:07:31.343620  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:07:31.615877  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:07:33.616194  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:07:32.876932  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:07:34.877535  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:07:33.101370  242388 pod_ready.go:102] pod "coredns-64897985d-n8275" in "kube-system" namespace has status "Ready":"False"
	I0412 20:07:35.601954  242388 pod_ready.go:102] pod "coredns-64897985d-n8275" in "kube-system" namespace has status "Ready":"False"
	I0412 20:07:33.344119  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:07:35.345714  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:07:37.843395  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:07:35.617001  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:07:38.116259  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:07:40.116468  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:07:37.377700  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:07:39.876618  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:07:41.877026  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:07:38.100797  242388 pod_ready.go:102] pod "coredns-64897985d-n8275" in "kube-system" namespace has status "Ready":"False"
	I0412 20:07:40.101381  242388 pod_ready.go:102] pod "coredns-64897985d-n8275" in "kube-system" namespace has status "Ready":"False"
	I0412 20:07:40.344173  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:07:42.843994  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:07:42.116642  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:07:44.116961  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:07:43.877127  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:07:45.877640  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:07:42.601782  242388 pod_ready.go:102] pod "coredns-64897985d-n8275" in "kube-system" namespace has status "Ready":"False"
	I0412 20:07:45.101453  242388 pod_ready.go:102] pod "coredns-64897985d-n8275" in "kube-system" namespace has status "Ready":"False"
	I0412 20:07:47.101535  242388 pod_ready.go:102] pod "coredns-64897985d-n8275" in "kube-system" namespace has status "Ready":"False"
	I0412 20:07:47.106719  242388 pod_ready.go:81] duration metric: took 4m0.017548884s waiting for pod "coredns-64897985d-n8275" in "kube-system" namespace to be "Ready" ...
	E0412 20:07:47.106749  242388 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0412 20:07:47.106760  242388 pod_ready.go:78] waiting up to 5m0s for pod "etcd-bridge-20220412195202-42006" in "kube-system" namespace to be "Ready" ...
	I0412 20:07:47.111547  242388 pod_ready.go:92] pod "etcd-bridge-20220412195202-42006" in "kube-system" namespace has status "Ready":"True"
	I0412 20:07:47.111568  242388 pod_ready.go:81] duration metric: took 4.800194ms waiting for pod "etcd-bridge-20220412195202-42006" in "kube-system" namespace to be "Ready" ...
	I0412 20:07:47.111577  242388 pod_ready.go:78] waiting up to 5m0s for pod "kube-apiserver-bridge-20220412195202-42006" in "kube-system" namespace to be "Ready" ...
	I0412 20:07:47.116360  242388 pod_ready.go:92] pod "kube-apiserver-bridge-20220412195202-42006" in "kube-system" namespace has status "Ready":"True"
	I0412 20:07:47.116386  242388 pod_ready.go:81] duration metric: took 4.802187ms waiting for pod "kube-apiserver-bridge-20220412195202-42006" in "kube-system" namespace to be "Ready" ...
	I0412 20:07:47.116401  242388 pod_ready.go:78] waiting up to 5m0s for pod "kube-controller-manager-bridge-20220412195202-42006" in "kube-system" namespace to be "Ready" ...
	I0412 20:07:47.120884  242388 pod_ready.go:92] pod "kube-controller-manager-bridge-20220412195202-42006" in "kube-system" namespace has status "Ready":"True"
	I0412 20:07:47.120904  242388 pod_ready.go:81] duration metric: took 4.495101ms waiting for pod "kube-controller-manager-bridge-20220412195202-42006" in "kube-system" namespace to be "Ready" ...
	I0412 20:07:47.120915  242388 pod_ready.go:78] waiting up to 5m0s for pod "kube-proxy-4ds2h" in "kube-system" namespace to be "Ready" ...
	I0412 20:07:45.343597  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:07:47.343677  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:07:46.117375  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:07:48.616059  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:07:48.377421  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:07:50.876981  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:07:47.498845  242388 pod_ready.go:92] pod "kube-proxy-4ds2h" in "kube-system" namespace has status "Ready":"True"
	I0412 20:07:47.498874  242388 pod_ready.go:81] duration metric: took 377.951883ms waiting for pod "kube-proxy-4ds2h" in "kube-system" namespace to be "Ready" ...
	I0412 20:07:47.498887  242388 pod_ready.go:78] waiting up to 5m0s for pod "kube-scheduler-bridge-20220412195202-42006" in "kube-system" namespace to be "Ready" ...
	I0412 20:07:47.898940  242388 pod_ready.go:92] pod "kube-scheduler-bridge-20220412195202-42006" in "kube-system" namespace has status "Ready":"True"
	I0412 20:07:47.898961  242388 pod_ready.go:81] duration metric: took 400.06795ms waiting for pod "kube-scheduler-bridge-20220412195202-42006" in "kube-system" namespace to be "Ready" ...
	I0412 20:07:47.898970  242388 pod_ready.go:38] duration metric: took 4m11.884749406s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0412 20:07:47.898991  242388 api_server.go:51] waiting for apiserver process to appear ...
	I0412 20:07:47.899009  242388 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0412 20:07:47.899050  242388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0412 20:07:47.926295  242388 cri.go:87] found id: "14b9e14583de0fe8ee16440c2632ec6b373bd957fe60dff98bc7c5ac6e529a66"
	I0412 20:07:47.926324  242388 cri.go:87] found id: ""
	I0412 20:07:47.926330  242388 logs.go:274] 1 containers: [14b9e14583de0fe8ee16440c2632ec6b373bd957fe60dff98bc7c5ac6e529a66]
	I0412 20:07:47.926372  242388 ssh_runner.go:195] Run: which crictl
	I0412 20:07:47.929408  242388 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0412 20:07:47.929469  242388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0412 20:07:47.953916  242388 cri.go:87] found id: "4499cc1763b0967e7077cfe4e08910c5a572b73157cdbf56ab3e1e2b021b0677"
	I0412 20:07:47.953945  242388 cri.go:87] found id: ""
	I0412 20:07:47.953953  242388 logs.go:274] 1 containers: [4499cc1763b0967e7077cfe4e08910c5a572b73157cdbf56ab3e1e2b021b0677]
	I0412 20:07:47.953996  242388 ssh_runner.go:195] Run: which crictl
	I0412 20:07:47.957205  242388 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0412 20:07:47.957265  242388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0412 20:07:47.982927  242388 cri.go:87] found id: "d328c5748827fbdbf41dcc9c6f12ed0e3247a0b507facf1d8c298b78a7e37c18"
	I0412 20:07:47.982954  242388 cri.go:87] found id: ""
	I0412 20:07:47.982971  242388 logs.go:274] 1 containers: [d328c5748827fbdbf41dcc9c6f12ed0e3247a0b507facf1d8c298b78a7e37c18]
	I0412 20:07:47.983015  242388 ssh_runner.go:195] Run: which crictl
	I0412 20:07:47.986670  242388 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0412 20:07:47.986733  242388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0412 20:07:48.013485  242388 cri.go:87] found id: "cb51b1900e8eb1cc74d050ea5c14e8a975455db896ecffcd125e949d187757e6"
	I0412 20:07:48.013510  242388 cri.go:87] found id: ""
	I0412 20:07:48.013517  242388 logs.go:274] 1 containers: [cb51b1900e8eb1cc74d050ea5c14e8a975455db896ecffcd125e949d187757e6]
	I0412 20:07:48.013560  242388 ssh_runner.go:195] Run: which crictl
	I0412 20:07:48.016841  242388 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0412 20:07:48.016907  242388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0412 20:07:48.042036  242388 cri.go:87] found id: "c894381c15be0db1ea25676c013d65df183f61faf81eb360ff971d07631c581b"
	I0412 20:07:48.042064  242388 cri.go:87] found id: ""
	I0412 20:07:48.042071  242388 logs.go:274] 1 containers: [c894381c15be0db1ea25676c013d65df183f61faf81eb360ff971d07631c581b]
	I0412 20:07:48.042114  242388 ssh_runner.go:195] Run: which crictl
	I0412 20:07:48.045287  242388 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0412 20:07:48.045346  242388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0412 20:07:48.072774  242388 cri.go:87] found id: ""
	I0412 20:07:48.072804  242388 logs.go:274] 0 containers: []
	W0412 20:07:48.072811  242388 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0412 20:07:48.072818  242388 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0412 20:07:48.072884  242388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0412 20:07:48.101123  242388 cri.go:87] found id: "7f4eb82ce17bfbde42ae8987aac6d331a0d5ac45a795181983fd6887465383bc"
	I0412 20:07:48.101152  242388 cri.go:87] found id: ""
	I0412 20:07:48.101165  242388 logs.go:274] 1 containers: [7f4eb82ce17bfbde42ae8987aac6d331a0d5ac45a795181983fd6887465383bc]
	I0412 20:07:48.101210  242388 ssh_runner.go:195] Run: which crictl
	I0412 20:07:48.104916  242388 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0412 20:07:48.104978  242388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0412 20:07:48.131749  242388 cri.go:87] found id: "bf8873b5a0e902327042735cc9f938961c600e277e3f0afe20bbf36bb95d9273"
	I0412 20:07:48.131777  242388 cri.go:87] found id: ""
	I0412 20:07:48.131785  242388 logs.go:274] 1 containers: [bf8873b5a0e902327042735cc9f938961c600e277e3f0afe20bbf36bb95d9273]
	I0412 20:07:48.131844  242388 ssh_runner.go:195] Run: which crictl
	I0412 20:07:48.135247  242388 logs.go:123] Gathering logs for describe nodes ...
	I0412 20:07:48.135275  242388 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0412 20:07:48.220592  242388 logs.go:123] Gathering logs for etcd [4499cc1763b0967e7077cfe4e08910c5a572b73157cdbf56ab3e1e2b021b0677] ...
	I0412 20:07:48.220630  242388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4499cc1763b0967e7077cfe4e08910c5a572b73157cdbf56ab3e1e2b021b0677"
	I0412 20:07:48.254716  242388 logs.go:123] Gathering logs for coredns [d328c5748827fbdbf41dcc9c6f12ed0e3247a0b507facf1d8c298b78a7e37c18] ...
	I0412 20:07:48.254755  242388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d328c5748827fbdbf41dcc9c6f12ed0e3247a0b507facf1d8c298b78a7e37c18"
	I0412 20:07:48.283282  242388 logs.go:123] Gathering logs for storage-provisioner [7f4eb82ce17bfbde42ae8987aac6d331a0d5ac45a795181983fd6887465383bc] ...
	I0412 20:07:48.283320  242388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7f4eb82ce17bfbde42ae8987aac6d331a0d5ac45a795181983fd6887465383bc"
	I0412 20:07:48.313931  242388 logs.go:123] Gathering logs for kube-controller-manager [bf8873b5a0e902327042735cc9f938961c600e277e3f0afe20bbf36bb95d9273] ...
	I0412 20:07:48.313970  242388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bf8873b5a0e902327042735cc9f938961c600e277e3f0afe20bbf36bb95d9273"
	I0412 20:07:48.363501  242388 logs.go:123] Gathering logs for container status ...
	I0412 20:07:48.363547  242388 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0412 20:07:48.397214  242388 logs.go:123] Gathering logs for kubelet ...
	I0412 20:07:48.397248  242388 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0412 20:07:48.452454  242388 logs.go:123] Gathering logs for dmesg ...
	I0412 20:07:48.452497  242388 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0412 20:07:48.482460  242388 logs.go:123] Gathering logs for kube-apiserver [14b9e14583de0fe8ee16440c2632ec6b373bd957fe60dff98bc7c5ac6e529a66] ...
	I0412 20:07:48.482499  242388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 14b9e14583de0fe8ee16440c2632ec6b373bd957fe60dff98bc7c5ac6e529a66"
	I0412 20:07:48.515058  242388 logs.go:123] Gathering logs for kube-scheduler [cb51b1900e8eb1cc74d050ea5c14e8a975455db896ecffcd125e949d187757e6] ...
	I0412 20:07:48.515095  242388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb51b1900e8eb1cc74d050ea5c14e8a975455db896ecffcd125e949d187757e6"
	I0412 20:07:48.551729  242388 logs.go:123] Gathering logs for kube-proxy [c894381c15be0db1ea25676c013d65df183f61faf81eb360ff971d07631c581b] ...
	I0412 20:07:48.551766  242388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c894381c15be0db1ea25676c013d65df183f61faf81eb360ff971d07631c581b"
	I0412 20:07:48.578067  242388 logs.go:123] Gathering logs for containerd ...
	I0412 20:07:48.578099  242388 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0412 20:07:51.118753  242388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:07:51.129713  242388 api_server.go:71] duration metric: took 4m15.249643945s to wait for apiserver process to appear ...
	I0412 20:07:51.129749  242388 api_server.go:87] waiting for apiserver healthz status ...
	I0412 20:07:51.129776  242388 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0412 20:07:51.129847  242388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0412 20:07:51.155410  242388 cri.go:87] found id: "14b9e14583de0fe8ee16440c2632ec6b373bd957fe60dff98bc7c5ac6e529a66"
	I0412 20:07:51.155441  242388 cri.go:87] found id: ""
	I0412 20:07:51.155449  242388 logs.go:274] 1 containers: [14b9e14583de0fe8ee16440c2632ec6b373bd957fe60dff98bc7c5ac6e529a66]
	I0412 20:07:51.155507  242388 ssh_runner.go:195] Run: which crictl
	I0412 20:07:51.158934  242388 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0412 20:07:51.159025  242388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0412 20:07:51.184706  242388 cri.go:87] found id: "4499cc1763b0967e7077cfe4e08910c5a572b73157cdbf56ab3e1e2b021b0677"
	I0412 20:07:51.184740  242388 cri.go:87] found id: ""
	I0412 20:07:51.184749  242388 logs.go:274] 1 containers: [4499cc1763b0967e7077cfe4e08910c5a572b73157cdbf56ab3e1e2b021b0677]
	I0412 20:07:51.184802  242388 ssh_runner.go:195] Run: which crictl
	I0412 20:07:51.188125  242388 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0412 20:07:51.188227  242388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0412 20:07:51.216662  242388 cri.go:87] found id: "d328c5748827fbdbf41dcc9c6f12ed0e3247a0b507facf1d8c298b78a7e37c18"
	I0412 20:07:51.216698  242388 cri.go:87] found id: ""
	I0412 20:07:51.216708  242388 logs.go:274] 1 containers: [d328c5748827fbdbf41dcc9c6f12ed0e3247a0b507facf1d8c298b78a7e37c18]
	I0412 20:07:51.216776  242388 ssh_runner.go:195] Run: which crictl
	I0412 20:07:51.220143  242388 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0412 20:07:51.220208  242388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0412 20:07:51.245862  242388 cri.go:87] found id: "cb51b1900e8eb1cc74d050ea5c14e8a975455db896ecffcd125e949d187757e6"
	I0412 20:07:51.245888  242388 cri.go:87] found id: ""
	I0412 20:07:51.245896  242388 logs.go:274] 1 containers: [cb51b1900e8eb1cc74d050ea5c14e8a975455db896ecffcd125e949d187757e6]
	I0412 20:07:51.245936  242388 ssh_runner.go:195] Run: which crictl
	I0412 20:07:51.249163  242388 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0412 20:07:51.249217  242388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0412 20:07:51.274359  242388 cri.go:87] found id: "c894381c15be0db1ea25676c013d65df183f61faf81eb360ff971d07631c581b"
	I0412 20:07:51.274382  242388 cri.go:87] found id: ""
	I0412 20:07:51.274392  242388 logs.go:274] 1 containers: [c894381c15be0db1ea25676c013d65df183f61faf81eb360ff971d07631c581b]
	I0412 20:07:51.274434  242388 ssh_runner.go:195] Run: which crictl
	I0412 20:07:51.277776  242388 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0412 20:07:51.277848  242388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0412 20:07:51.304608  242388 cri.go:87] found id: ""
	I0412 20:07:51.304639  242388 logs.go:274] 0 containers: []
	W0412 20:07:51.304646  242388 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0412 20:07:51.304654  242388 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0412 20:07:51.304713  242388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0412 20:07:51.331875  242388 cri.go:87] found id: "7f4eb82ce17bfbde42ae8987aac6d331a0d5ac45a795181983fd6887465383bc"
	I0412 20:07:51.331910  242388 cri.go:87] found id: ""
	I0412 20:07:51.331919  242388 logs.go:274] 1 containers: [7f4eb82ce17bfbde42ae8987aac6d331a0d5ac45a795181983fd6887465383bc]
	I0412 20:07:51.331968  242388 ssh_runner.go:195] Run: which crictl
	I0412 20:07:51.335475  242388 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0412 20:07:51.335533  242388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0412 20:07:51.364993  242388 cri.go:87] found id: "bf8873b5a0e902327042735cc9f938961c600e277e3f0afe20bbf36bb95d9273"
	I0412 20:07:51.365031  242388 cri.go:87] found id: ""
	I0412 20:07:51.365039  242388 logs.go:274] 1 containers: [bf8873b5a0e902327042735cc9f938961c600e277e3f0afe20bbf36bb95d9273]
	I0412 20:07:51.365086  242388 ssh_runner.go:195] Run: which crictl
	I0412 20:07:51.368235  242388 logs.go:123] Gathering logs for kube-proxy [c894381c15be0db1ea25676c013d65df183f61faf81eb360ff971d07631c581b] ...
	I0412 20:07:51.368261  242388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c894381c15be0db1ea25676c013d65df183f61faf81eb360ff971d07631c581b"
	I0412 20:07:51.394766  242388 logs.go:123] Gathering logs for containerd ...
	I0412 20:07:51.394798  242388 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0412 20:07:51.434414  242388 logs.go:123] Gathering logs for container status ...
	I0412 20:07:51.434459  242388 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0412 20:07:51.467170  242388 logs.go:123] Gathering logs for kubelet ...
	I0412 20:07:51.467201  242388 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0412 20:07:51.523890  242388 logs.go:123] Gathering logs for describe nodes ...
	I0412 20:07:51.523931  242388 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0412 20:07:51.608825  242388 logs.go:123] Gathering logs for coredns [d328c5748827fbdbf41dcc9c6f12ed0e3247a0b507facf1d8c298b78a7e37c18] ...
	I0412 20:07:51.608866  242388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d328c5748827fbdbf41dcc9c6f12ed0e3247a0b507facf1d8c298b78a7e37c18"
	I0412 20:07:51.638923  242388 logs.go:123] Gathering logs for kube-scheduler [cb51b1900e8eb1cc74d050ea5c14e8a975455db896ecffcd125e949d187757e6] ...
	I0412 20:07:51.638959  242388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb51b1900e8eb1cc74d050ea5c14e8a975455db896ecffcd125e949d187757e6"
	I0412 20:07:51.694709  242388 logs.go:123] Gathering logs for kube-controller-manager [bf8873b5a0e902327042735cc9f938961c600e277e3f0afe20bbf36bb95d9273] ...
	I0412 20:07:51.694754  242388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bf8873b5a0e902327042735cc9f938961c600e277e3f0afe20bbf36bb95d9273"
	I0412 20:07:51.735150  242388 logs.go:123] Gathering logs for dmesg ...
	I0412 20:07:51.735190  242388 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0412 20:07:51.764872  242388 logs.go:123] Gathering logs for kube-apiserver [14b9e14583de0fe8ee16440c2632ec6b373bd957fe60dff98bc7c5ac6e529a66] ...
	I0412 20:07:51.764910  242388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 14b9e14583de0fe8ee16440c2632ec6b373bd957fe60dff98bc7c5ac6e529a66"
	I0412 20:07:51.798060  242388 logs.go:123] Gathering logs for etcd [4499cc1763b0967e7077cfe4e08910c5a572b73157cdbf56ab3e1e2b021b0677] ...
	I0412 20:07:51.798099  242388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4499cc1763b0967e7077cfe4e08910c5a572b73157cdbf56ab3e1e2b021b0677"
	I0412 20:07:51.831196  242388 logs.go:123] Gathering logs for storage-provisioner [7f4eb82ce17bfbde42ae8987aac6d331a0d5ac45a795181983fd6887465383bc] ...
	I0412 20:07:51.831235  242388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7f4eb82ce17bfbde42ae8987aac6d331a0d5ac45a795181983fd6887465383bc"
	I0412 20:07:49.842863  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:07:51.843863  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:07:50.616244  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:07:52.616502  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:07:55.116309  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:07:52.877111  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:07:55.377129  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:07:54.360139  242388 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0412 20:07:54.365043  242388 api_server.go:266] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0412 20:07:54.365929  242388 api_server.go:140] control plane version: v1.23.5
	I0412 20:07:54.365952  242388 api_server.go:130] duration metric: took 3.236196704s to wait for apiserver health ...
	I0412 20:07:54.365961  242388 system_pods.go:43] waiting for kube-system pods to appear ...
	I0412 20:07:54.365980  242388 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0412 20:07:54.366057  242388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0412 20:07:54.392193  242388 cri.go:87] found id: "14b9e14583de0fe8ee16440c2632ec6b373bd957fe60dff98bc7c5ac6e529a66"
	I0412 20:07:54.392229  242388 cri.go:87] found id: ""
	I0412 20:07:54.392238  242388 logs.go:274] 1 containers: [14b9e14583de0fe8ee16440c2632ec6b373bd957fe60dff98bc7c5ac6e529a66]
	I0412 20:07:54.392288  242388 ssh_runner.go:195] Run: which crictl
	I0412 20:07:54.395591  242388 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0412 20:07:54.395644  242388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0412 20:07:54.423066  242388 cri.go:87] found id: "4499cc1763b0967e7077cfe4e08910c5a572b73157cdbf56ab3e1e2b021b0677"
	I0412 20:07:54.423101  242388 cri.go:87] found id: ""
	I0412 20:07:54.423109  242388 logs.go:274] 1 containers: [4499cc1763b0967e7077cfe4e08910c5a572b73157cdbf56ab3e1e2b021b0677]
	I0412 20:07:54.423152  242388 ssh_runner.go:195] Run: which crictl
	I0412 20:07:54.426420  242388 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0412 20:07:54.426489  242388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0412 20:07:54.451827  242388 cri.go:87] found id: "d328c5748827fbdbf41dcc9c6f12ed0e3247a0b507facf1d8c298b78a7e37c18"
	I0412 20:07:54.451857  242388 cri.go:87] found id: ""
	I0412 20:07:54.451865  242388 logs.go:274] 1 containers: [d328c5748827fbdbf41dcc9c6f12ed0e3247a0b507facf1d8c298b78a7e37c18]
	I0412 20:07:54.451921  242388 ssh_runner.go:195] Run: which crictl
	I0412 20:07:54.455198  242388 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0412 20:07:54.455267  242388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0412 20:07:54.482696  242388 cri.go:87] found id: "cb51b1900e8eb1cc74d050ea5c14e8a975455db896ecffcd125e949d187757e6"
	I0412 20:07:54.482727  242388 cri.go:87] found id: ""
	I0412 20:07:54.482738  242388 logs.go:274] 1 containers: [cb51b1900e8eb1cc74d050ea5c14e8a975455db896ecffcd125e949d187757e6]
	I0412 20:07:54.482799  242388 ssh_runner.go:195] Run: which crictl
	I0412 20:07:54.486206  242388 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0412 20:07:54.486281  242388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0412 20:07:54.513262  242388 cri.go:87] found id: "c894381c15be0db1ea25676c013d65df183f61faf81eb360ff971d07631c581b"
	I0412 20:07:54.513289  242388 cri.go:87] found id: ""
	I0412 20:07:54.513296  242388 logs.go:274] 1 containers: [c894381c15be0db1ea25676c013d65df183f61faf81eb360ff971d07631c581b]
	I0412 20:07:54.513336  242388 ssh_runner.go:195] Run: which crictl
	I0412 20:07:54.516728  242388 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0412 20:07:54.516810  242388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0412 20:07:54.541344  242388 cri.go:87] found id: ""
	I0412 20:07:54.541369  242388 logs.go:274] 0 containers: []
	W0412 20:07:54.541376  242388 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0412 20:07:54.541383  242388 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0412 20:07:54.541444  242388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0412 20:07:54.567592  242388 cri.go:87] found id: "7f4eb82ce17bfbde42ae8987aac6d331a0d5ac45a795181983fd6887465383bc"
	I0412 20:07:54.567616  242388 cri.go:87] found id: ""
	I0412 20:07:54.567622  242388 logs.go:274] 1 containers: [7f4eb82ce17bfbde42ae8987aac6d331a0d5ac45a795181983fd6887465383bc]
	I0412 20:07:54.567676  242388 ssh_runner.go:195] Run: which crictl
	I0412 20:07:54.570863  242388 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0412 20:07:54.570934  242388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0412 20:07:54.597122  242388 cri.go:87] found id: "bf8873b5a0e902327042735cc9f938961c600e277e3f0afe20bbf36bb95d9273"
	I0412 20:07:54.597152  242388 cri.go:87] found id: ""
	I0412 20:07:54.597163  242388 logs.go:274] 1 containers: [bf8873b5a0e902327042735cc9f938961c600e277e3f0afe20bbf36bb95d9273]
	I0412 20:07:54.597214  242388 ssh_runner.go:195] Run: which crictl
	I0412 20:07:54.600606  242388 logs.go:123] Gathering logs for container status ...
	I0412 20:07:54.600635  242388 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0412 20:07:54.630713  242388 logs.go:123] Gathering logs for kube-apiserver [14b9e14583de0fe8ee16440c2632ec6b373bd957fe60dff98bc7c5ac6e529a66] ...
	I0412 20:07:54.630756  242388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 14b9e14583de0fe8ee16440c2632ec6b373bd957fe60dff98bc7c5ac6e529a66"
	I0412 20:07:54.661857  242388 logs.go:123] Gathering logs for kube-scheduler [cb51b1900e8eb1cc74d050ea5c14e8a975455db896ecffcd125e949d187757e6] ...
	I0412 20:07:54.661892  242388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb51b1900e8eb1cc74d050ea5c14e8a975455db896ecffcd125e949d187757e6"
	I0412 20:07:54.696955  242388 logs.go:123] Gathering logs for kube-proxy [c894381c15be0db1ea25676c013d65df183f61faf81eb360ff971d07631c581b] ...
	I0412 20:07:54.697002  242388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c894381c15be0db1ea25676c013d65df183f61faf81eb360ff971d07631c581b"
	I0412 20:07:54.725596  242388 logs.go:123] Gathering logs for storage-provisioner [7f4eb82ce17bfbde42ae8987aac6d331a0d5ac45a795181983fd6887465383bc] ...
	I0412 20:07:54.725626  242388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7f4eb82ce17bfbde42ae8987aac6d331a0d5ac45a795181983fd6887465383bc"
	I0412 20:07:54.751160  242388 logs.go:123] Gathering logs for kube-controller-manager [bf8873b5a0e902327042735cc9f938961c600e277e3f0afe20bbf36bb95d9273] ...
	I0412 20:07:54.751192  242388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bf8873b5a0e902327042735cc9f938961c600e277e3f0afe20bbf36bb95d9273"
	I0412 20:07:54.788085  242388 logs.go:123] Gathering logs for containerd ...
	I0412 20:07:54.788125  242388 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0412 20:07:54.828590  242388 logs.go:123] Gathering logs for kubelet ...
	I0412 20:07:54.828633  242388 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0412 20:07:54.883562  242388 logs.go:123] Gathering logs for dmesg ...
	I0412 20:07:54.883616  242388 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0412 20:07:54.914264  242388 logs.go:123] Gathering logs for describe nodes ...
	I0412 20:07:54.914318  242388 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0412 20:07:54.995678  242388 logs.go:123] Gathering logs for etcd [4499cc1763b0967e7077cfe4e08910c5a572b73157cdbf56ab3e1e2b021b0677] ...
	I0412 20:07:54.995716  242388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4499cc1763b0967e7077cfe4e08910c5a572b73157cdbf56ab3e1e2b021b0677"
	I0412 20:07:55.028252  242388 logs.go:123] Gathering logs for coredns [d328c5748827fbdbf41dcc9c6f12ed0e3247a0b507facf1d8c298b78a7e37c18] ...
	I0412 20:07:55.028285  242388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d328c5748827fbdbf41dcc9c6f12ed0e3247a0b507facf1d8c298b78a7e37c18"
	I0412 20:07:57.562998  242388 system_pods.go:59] 7 kube-system pods found
	I0412 20:07:57.563046  242388 system_pods.go:61] "coredns-64897985d-n8275" [6288c440-7286-4371-887b-05bdd2c3ae03] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0412 20:07:57.563056  242388 system_pods.go:61] "etcd-bridge-20220412195202-42006" [5b8eb204-1b53-40ed-99a0-9ad66b992a11] Running
	I0412 20:07:57.563061  242388 system_pods.go:61] "kube-apiserver-bridge-20220412195202-42006" [b3d2ad41-c353-4f4d-adec-8eb4e415a3a9] Running
	I0412 20:07:57.563068  242388 system_pods.go:61] "kube-controller-manager-bridge-20220412195202-42006" [60642473-00d1-4412-9acc-f3fca32da8d1] Running
	I0412 20:07:57.563074  242388 system_pods.go:61] "kube-proxy-4ds2h" [b20999c9-8e7e-4489-b3d7-d07d890ff182] Running
	I0412 20:07:57.563082  242388 system_pods.go:61] "kube-scheduler-bridge-20220412195202-42006" [0b786d2f-ce5c-481f-af32-fb5574748ff4] Running
	I0412 20:07:57.563089  242388 system_pods.go:61] "storage-provisioner" [0d99066c-431e-4568-adf0-f4d550abb732] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0412 20:07:57.563103  242388 system_pods.go:74] duration metric: took 3.197136349s to wait for pod list to return data ...
	I0412 20:07:57.563119  242388 default_sa.go:34] waiting for default service account to be created ...
	I0412 20:07:57.565734  242388 default_sa.go:45] found service account: "default"
	I0412 20:07:57.565758  242388 default_sa.go:55] duration metric: took 2.633322ms for default service account to be created ...
	I0412 20:07:57.565767  242388 system_pods.go:116] waiting for k8s-apps to be running ...
	I0412 20:07:57.570422  242388 system_pods.go:86] 7 kube-system pods found
	I0412 20:07:57.570457  242388 system_pods.go:89] "coredns-64897985d-n8275" [6288c440-7286-4371-887b-05bdd2c3ae03] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0412 20:07:57.570464  242388 system_pods.go:89] "etcd-bridge-20220412195202-42006" [5b8eb204-1b53-40ed-99a0-9ad66b992a11] Running
	I0412 20:07:57.570469  242388 system_pods.go:89] "kube-apiserver-bridge-20220412195202-42006" [b3d2ad41-c353-4f4d-adec-8eb4e415a3a9] Running
	I0412 20:07:57.570474  242388 system_pods.go:89] "kube-controller-manager-bridge-20220412195202-42006" [60642473-00d1-4412-9acc-f3fca32da8d1] Running
	I0412 20:07:57.570478  242388 system_pods.go:89] "kube-proxy-4ds2h" [b20999c9-8e7e-4489-b3d7-d07d890ff182] Running
	I0412 20:07:57.570483  242388 system_pods.go:89] "kube-scheduler-bridge-20220412195202-42006" [0b786d2f-ce5c-481f-af32-fb5574748ff4] Running
	I0412 20:07:57.570488  242388 system_pods.go:89] "storage-provisioner" [0d99066c-431e-4568-adf0-f4d550abb732] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0412 20:07:57.570494  242388 system_pods.go:126] duration metric: took 4.722384ms to wait for k8s-apps to be running ...
	I0412 20:07:57.570505  242388 system_svc.go:44] waiting for kubelet service to be running ....
	I0412 20:07:57.570548  242388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0412 20:07:57.581395  242388 system_svc.go:56] duration metric: took 10.877477ms WaitForService to wait for kubelet.
	I0412 20:07:57.581432  242388 kubeadm.go:548] duration metric: took 4m21.701368513s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0412 20:07:57.581476  242388 node_conditions.go:102] verifying NodePressure condition ...
	I0412 20:07:57.584483  242388 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0412 20:07:57.584522  242388 node_conditions.go:123] node cpu capacity is 8
	I0412 20:07:57.584534  242388 node_conditions.go:105] duration metric: took 3.052647ms to run NodePressure ...
	I0412 20:07:57.584546  242388 start.go:213] waiting for startup goroutines ...
	I0412 20:07:57.623316  242388 start.go:499] kubectl: 1.23.5, cluster: 1.23.5 (minor skew: 0)
	I0412 20:07:57.625886  242388 out.go:176] * Done! kubectl is now configured to use "bridge-20220412195202-42006" cluster and "default" namespace by default
	I0412 20:07:53.843924  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:07:56.343760  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:07:57.616305  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:07:59.616970  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:07:57.876917  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:07:59.876969  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:07:58.843532  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:08:01.343266  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:08:02.115904  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:08:04.116462  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:08:02.377072  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:08:04.377447  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:08:06.377927  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:08:03.343687  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:08:05.344519  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:08:07.844120  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:08:06.616342  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:08:09.116011  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:08:08.876715  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:08:10.876992  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:08:10.343283  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:08:12.344262  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:08:11.116286  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:08:13.116651  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:08:12.877639  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:08:15.377865  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:08:14.844233  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:08:16.844349  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:08:15.616009  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:08:17.616863  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:08:20.116620  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:08:17.877000  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:08:19.877332  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:08:21.877545  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:08:19.344311  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:08:21.843388  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:08:22.116816  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:08:24.616268  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:08:24.377891  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:08:26.876684  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:08:23.844010  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:08:26.343650  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:08:27.116520  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:08:29.615844  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:08:28.877006  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:08:30.877641  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:08:28.343803  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:08:30.843896  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:08:31.617165  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:08:34.116149  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:08:33.377445  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:08:35.876596  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:08:33.342972  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:08:35.343470  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:08:37.345152  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:08:36.116632  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:08:38.616288  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:08:37.877405  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:08:40.377447  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:08:39.844005  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:08:42.344565  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:08:40.617049  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:08:42.617248  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:08:45.116711  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:08:42.876734  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:08:44.877017  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:08:44.843371  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:08:47.343783  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:08:47.616263  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:08:49.616386  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:08:47.376581  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:08:49.377052  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:08:51.377414  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:08:49.343917  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:08:51.344008  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:08:52.117238  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:08:54.616379  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:08:53.877648  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:08:56.376551  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:08:53.843110  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:08:55.844092  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:08:57.116572  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:08:59.616687  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:08:58.376693  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:09:00.377390  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:08:58.343120  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:09:00.843928  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:09:02.116215  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:09:04.616429  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:09:02.876643  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:09:04.877491  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:09:03.343475  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:09:05.344253  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:09:07.843997  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:09:06.616538  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:09:08.616760  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:09:07.376877  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:09:09.377403  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:09:11.876753  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:09:10.343170  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:09:12.844102  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:09:10.616938  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:09:13.116240  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:09:13.877655  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:09:15.379867  248748 node_ready.go:38] duration metric: took 4m0.009449893s waiting for node "old-k8s-version-20220412200421-42006" to be "Ready" ...
	I0412 20:09:15.382455  248748 out.go:176] 
	W0412 20:09:15.382637  248748 out.go:241] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0412 20:09:15.382653  248748 out.go:241] * 
	W0412 20:09:15.383376  248748 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	78996594d04da       6de166512aa22       About a minute ago   Running             kindnet-cni               1                   72ec8def5691d
	019c66def7622       6de166512aa22       4 minutes ago        Exited              kindnet-cni               0                   72ec8def5691d
	d1642a69585f2       c21b0c7400f98       4 minutes ago        Running             kube-proxy                0                   633802cf99325
	6cc69a6c92a9c       301ddc62b80b1       4 minutes ago        Running             kube-scheduler            0                   a58c9be88b91f
	e47ba7bc7187c       b305571ca60a5       4 minutes ago        Running             kube-apiserver            0                   1038e52b21658
	f29f2d4e263bc       b2756210eeabf       4 minutes ago        Running             etcd                      0                   8b1dc4454ac4d
	e3d3ef830b73a       06a629a7e51cd       4 minutes ago        Running             kube-controller-manager   0                   7042f76bd3470
	
	* 
	* ==> containerd <==
	* -- Logs begin at Tue 2022-04-12 20:04:30 UTC, end at Tue 2022-04-12 20:09:16 UTC. --
	Apr 12 20:04:49 old-k8s-version-20220412200421-42006 containerd[471]: time="2022-04-12T20:04:49.505078930Z" level=info msg="StartContainer for \"6cc69a6c92a9c7e418d30d94f1777cbd24a28b39c530a70bc05aa2bb9749c133\" returns successfully"
	Apr 12 20:04:49 old-k8s-version-20220412200421-42006 containerd[471]: time="2022-04-12T20:04:49.506994496Z" level=info msg="StartContainer for \"e47ba7bc7187c135dde6e6c116fd570d9338c6fa80edee55405758c75532e6db\" returns successfully"
	Apr 12 20:05:14 old-k8s-version-20220412200421-42006 containerd[471]: time="2022-04-12T20:05:14.556892215Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
	Apr 12 20:05:14 old-k8s-version-20220412200421-42006 containerd[471]: time="2022-04-12T20:05:14.810640931Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kube-proxy-nt4pk,Uid:e0d683c7-40fd-43e1-ac82-a740e53a8513,Namespace:kube-system,Attempt:0,}"
	Apr 12 20:05:14 old-k8s-version-20220412200421-42006 containerd[471]: time="2022-04-12T20:05:14.817909285Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kindnet-xxqjk,Uid:306e6dc0-594c-4013-acc5-0fcbdf38806f,Namespace:kube-system,Attempt:0,}"
	Apr 12 20:05:14 old-k8s-version-20220412200421-42006 containerd[471]: time="2022-04-12T20:05:14.837013473Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/633802cf993258414601b7f1ffca58d0e5985f738a8bc33672c811660342e0fa pid=1795
	Apr 12 20:05:14 old-k8s-version-20220412200421-42006 containerd[471]: time="2022-04-12T20:05:14.843628917Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/72ec8def5691dad6428509dd888e491782c456828105fcda0a80993268baecd8 pid=1813
	Apr 12 20:05:14 old-k8s-version-20220412200421-42006 containerd[471]: time="2022-04-12T20:05:14.915895038Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nt4pk,Uid:e0d683c7-40fd-43e1-ac82-a740e53a8513,Namespace:kube-system,Attempt:0,} returns sandbox id \"633802cf993258414601b7f1ffca58d0e5985f738a8bc33672c811660342e0fa\""
	Apr 12 20:05:14 old-k8s-version-20220412200421-42006 containerd[471]: time="2022-04-12T20:05:14.919012813Z" level=info msg="CreateContainer within sandbox \"633802cf993258414601b7f1ffca58d0e5985f738a8bc33672c811660342e0fa\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
	Apr 12 20:05:14 old-k8s-version-20220412200421-42006 containerd[471]: time="2022-04-12T20:05:14.938029687Z" level=info msg="CreateContainer within sandbox \"633802cf993258414601b7f1ffca58d0e5985f738a8bc33672c811660342e0fa\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d1642a69585f2b5d8f43901e8a491cead56c56ef33038261d4145d7959922b9b\""
	Apr 12 20:05:14 old-k8s-version-20220412200421-42006 containerd[471]: time="2022-04-12T20:05:14.938764372Z" level=info msg="StartContainer for \"d1642a69585f2b5d8f43901e8a491cead56c56ef33038261d4145d7959922b9b\""
	Apr 12 20:05:15 old-k8s-version-20220412200421-42006 containerd[471]: time="2022-04-12T20:05:15.006847845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kindnet-xxqjk,Uid:306e6dc0-594c-4013-acc5-0fcbdf38806f,Namespace:kube-system,Attempt:0,} returns sandbox id \"72ec8def5691dad6428509dd888e491782c456828105fcda0a80993268baecd8\""
	Apr 12 20:05:15 old-k8s-version-20220412200421-42006 containerd[471]: time="2022-04-12T20:05:15.010558787Z" level=info msg="CreateContainer within sandbox \"72ec8def5691dad6428509dd888e491782c456828105fcda0a80993268baecd8\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:0,}"
	Apr 12 20:05:15 old-k8s-version-20220412200421-42006 containerd[471]: time="2022-04-12T20:05:15.032490428Z" level=info msg="CreateContainer within sandbox \"72ec8def5691dad6428509dd888e491782c456828105fcda0a80993268baecd8\" for &ContainerMetadata{Name:kindnet-cni,Attempt:0,} returns container id \"019c66def7622dba48d959bc981c7d3e780afe2450172b618014e5aa7f78e227\""
	Apr 12 20:05:15 old-k8s-version-20220412200421-42006 containerd[471]: time="2022-04-12T20:05:15.032953391Z" level=info msg="StartContainer for \"d1642a69585f2b5d8f43901e8a491cead56c56ef33038261d4145d7959922b9b\" returns successfully"
	Apr 12 20:05:15 old-k8s-version-20220412200421-42006 containerd[471]: time="2022-04-12T20:05:15.033222338Z" level=info msg="StartContainer for \"019c66def7622dba48d959bc981c7d3e780afe2450172b618014e5aa7f78e227\""
	Apr 12 20:05:15 old-k8s-version-20220412200421-42006 containerd[471]: time="2022-04-12T20:05:15.292546965Z" level=info msg="StartContainer for \"019c66def7622dba48d959bc981c7d3e780afe2450172b618014e5aa7f78e227\" returns successfully"
	Apr 12 20:07:55 old-k8s-version-20220412200421-42006 containerd[471]: time="2022-04-12T20:07:55.619837806Z" level=info msg="shim disconnected" id=019c66def7622dba48d959bc981c7d3e780afe2450172b618014e5aa7f78e227
	Apr 12 20:07:55 old-k8s-version-20220412200421-42006 containerd[471]: time="2022-04-12T20:07:55.619911584Z" level=warning msg="cleaning up after shim disconnected" id=019c66def7622dba48d959bc981c7d3e780afe2450172b618014e5aa7f78e227 namespace=k8s.io
	Apr 12 20:07:55 old-k8s-version-20220412200421-42006 containerd[471]: time="2022-04-12T20:07:55.619929568Z" level=info msg="cleaning up dead shim"
	Apr 12 20:07:55 old-k8s-version-20220412200421-42006 containerd[471]: time="2022-04-12T20:07:55.631047951Z" level=warning msg="cleanup warnings time=\"2022-04-12T20:07:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2468\n"
	Apr 12 20:07:55 old-k8s-version-20220412200421-42006 containerd[471]: time="2022-04-12T20:07:55.998042784Z" level=info msg="CreateContainer within sandbox \"72ec8def5691dad6428509dd888e491782c456828105fcda0a80993268baecd8\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:1,}"
	Apr 12 20:07:56 old-k8s-version-20220412200421-42006 containerd[471]: time="2022-04-12T20:07:56.012382481Z" level=info msg="CreateContainer within sandbox \"72ec8def5691dad6428509dd888e491782c456828105fcda0a80993268baecd8\" for &ContainerMetadata{Name:kindnet-cni,Attempt:1,} returns container id \"78996594d04da29b800c294937702cde8e1c1ed203ac6a1a024c00cbba2b0c74\""
	Apr 12 20:07:56 old-k8s-version-20220412200421-42006 containerd[471]: time="2022-04-12T20:07:56.012902078Z" level=info msg="StartContainer for \"78996594d04da29b800c294937702cde8e1c1ed203ac6a1a024c00cbba2b0c74\""
	Apr 12 20:07:56 old-k8s-version-20220412200421-42006 containerd[471]: time="2022-04-12T20:07:56.183845940Z" level=info msg="StartContainer for \"78996594d04da29b800c294937702cde8e1c1ed203ac6a1a024c00cbba2b0c74\" returns successfully"
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-20220412200421-42006
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-20220412200421-42006
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=dcd548d63d1c0dcbdc0ffc0bd37d4379117c142f
	                    minikube.k8s.io/name=old-k8s-version-20220412200421-42006
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_04_12T20_04_59_0700
	                    minikube.k8s.io/version=v1.25.2
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 12 Apr 2022 20:04:53 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 12 Apr 2022 20:08:54 +0000   Tue, 12 Apr 2022 20:04:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 12 Apr 2022 20:08:54 +0000   Tue, 12 Apr 2022 20:04:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 12 Apr 2022 20:08:54 +0000   Tue, 12 Apr 2022 20:04:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Tue, 12 Apr 2022 20:08:54 +0000   Tue, 12 Apr 2022 20:04:50 +0000   KubeletNotReady              runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    old-k8s-version-20220412200421-42006
	Capacity:
	 cpu:                8
	 ephemeral-storage:  304695084Ki
	 hugepages-1Gi:      0
	 hugepages-2Mi:      0
	 memory:             32873828Ki
	 pods:               110
	Allocatable:
	 cpu:                8
	 ephemeral-storage:  304695084Ki
	 hugepages-1Gi:      0
	 hugepages-2Mi:      0
	 memory:             32873828Ki
	 pods:               110
	System Info:
	 Machine ID:                 140a143b31184b58be947b52a01fff83
	 System UUID:                0b57e9d3-0bbc-4976-a928-dc02ca892e39
	 Boot ID:                    16b2caa1-c1b9-4ccc-85b8-d4dc3f51a5e1
	 Kernel Version:             5.13.0-1023-gcp
	 OS Image:                   Ubuntu 20.04.4 LTS
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  containerd://1.5.10
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (6 in total)
	  Namespace                  Name                                                            CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                                            ------------  ----------  ---------------  -------------  ---
	  kube-system                etcd-old-k8s-version-20220412200421-42006                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m17s
	  kube-system                kindnet-xxqjk                                                    100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m2s
	  kube-system                kube-apiserver-old-k8s-version-20220412200421-42006              250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m56s
	  kube-system                kube-controller-manager-old-k8s-version-20220412200421-42006    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m58s
	  kube-system                kube-proxy-nt4pk                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                kube-scheduler-old-k8s-version-20220412200421-42006              100m (1%)    0 (0%)      0 (0%)           0 (0%)         3m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                650m (8%)   100m (1%)
	  memory             50Mi (0%)   50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From                                              Message
	  ----    ------                   ----                   ----                                              -------
	  Normal  NodeHasSufficientMemory  4m28s (x8 over 4m28s)  kubelet, old-k8s-version-20220412200421-42006     Node old-k8s-version-20220412200421-42006 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m28s (x8 over 4m28s)  kubelet, old-k8s-version-20220412200421-42006     Node old-k8s-version-20220412200421-42006 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m28s (x7 over 4m28s)  kubelet, old-k8s-version-20220412200421-42006     Node old-k8s-version-20220412200421-42006 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m1s                   kube-proxy, old-k8s-version-20220412200421-42006  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +2.959845] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +1.007855] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +1.027949] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +6.444058] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +1.007474] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[Apr12 20:09] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +2.967828] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +1.035871] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +1.019954] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +2.943887] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +1.023840] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +1.023908] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000013] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	
	* 
	* ==> etcd [f29f2d4e263bc07cd05cd9c61510d49796a96af91aaf3c20135c8e50227408a5] <==
	* 2022-04-12 20:04:49.581771 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2022-04-12 20:04:49.581999 I | embed: listening for metrics on http://192.168.67.2:2381
	2022-04-12 20:04:49.582091 I | embed: listening for metrics on http://127.0.0.1:2381
	2022-04-12 20:04:49.806733 I | raft: 8688e899f7831fc7 is starting a new election at term 1
	2022-04-12 20:04:49.806783 I | raft: 8688e899f7831fc7 became candidate at term 2
	2022-04-12 20:04:49.806798 I | raft: 8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 2
	2022-04-12 20:04:49.806811 I | raft: 8688e899f7831fc7 became leader at term 2
	2022-04-12 20:04:49.806819 I | raft: raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 2
	2022-04-12 20:04:49.807090 I | etcdserver: published {Name:old-k8s-version-20220412200421-42006 ClientURLs:[https://192.168.67.2:2379]} to cluster 9d8fdeb88b6def78
	2022-04-12 20:04:49.807114 I | embed: ready to serve client requests
	2022-04-12 20:04:49.807165 I | etcdserver: setting up the initial cluster version to 3.3
	2022-04-12 20:04:49.807314 I | embed: ready to serve client requests
	2022-04-12 20:04:49.807714 N | etcdserver/membership: set the initial cluster version to 3.3
	2022-04-12 20:04:49.807811 I | etcdserver/api: enabled capabilities for version 3.3
	2022-04-12 20:04:49.808554 I | embed: serving client requests on 192.168.67.2:2379
	2022-04-12 20:04:49.808691 I | embed: serving client requests on 127.0.0.1:2379
	2022-04-12 20:04:54.979482 W | etcdserver: request "header:<ID:2289939807800189654 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/priorityclasses/system-node-critical\" mod_revision:0 > success:<request_put:<key:\"/registry/priorityclasses/system-node-critical\" value_size:221 >> failure:<>>" with result "size:14" took too long (127.000495ms) to execute
	2022-04-12 20:04:54.980336 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "range_response_count:0 size:4" took too long (131.368725ms) to execute
	2022-04-12 20:04:54.981355 W | etcdserver: read-only range request "key:\"/registry/clusterroles/system:aggregate-to-view\" " with result "range_response_count:0 size:4" took too long (185.420261ms) to execute
	2022-04-12 20:05:08.444522 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/cronjob-controller\" " with result "range_response_count:1 size:203" took too long (237.985152ms) to execute
	2022-04-12 20:05:08.611060 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/replication-controller\" " with result "range_response_count:0 size:5" took too long (156.655583ms) to execute
	2022-04-12 20:05:08.611112 W | etcdserver: read-only range request "key:\"/registry/jobs/\" range_end:\"/registry/jobs0\" limit:500 " with result "range_response_count:0 size:5" took too long (156.642288ms) to execute
	2022-04-12 20:05:11.193931 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/deployment-controller\" " with result "range_response_count:0 size:5" took too long (179.101374ms) to execute
	2022-04-12 20:05:11.556922 W | etcdserver: request "header:<ID:2289939807800190189 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/deployment-controller\" mod_revision:266 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/deployment-controller\" value_size:178 >> failure:<request_range:<key:\"/registry/serviceaccounts/kube-system/deployment-controller\" > >>" with result "size:16" took too long (184.09372ms) to execute
	2022-04-12 20:05:11.557051 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/certificate-controller\" " with result "range_response_count:0 size:5" took too long (259.936755ms) to execute
	
	* 
	* ==> kernel <==
	*  20:09:16 up  2:51,  0 users,  load average: 0.76, 1.49, 1.84
	Linux old-k8s-version-20220412200421-42006 5.13.0-1023-gcp #28~20.04.1-Ubuntu SMP Wed Mar 30 03:51:07 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [e47ba7bc7187c135dde6e6c116fd570d9338c6fa80edee55405758c75532e6db] <==
	* I0412 20:04:53.817855       1 naming_controller.go:288] Starting NamingConditionController
	I0412 20:04:53.817876       1 establishing_controller.go:73] Starting EstablishingController
	I0412 20:04:53.817895       1 nonstructuralschema_controller.go:191] Starting NonStructuralSchemaConditionController
	I0412 20:04:53.821058       1 apiapproval_controller.go:185] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0412 20:04:53.886711       1 cache.go:39] Caches are synced for autoregister controller
	I0412 20:04:53.888960       1 shared_informer.go:204] Caches are synced for crd-autoregister 
	I0412 20:04:53.894066       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0412 20:04:53.912646       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0412 20:04:54.785212       1 controller.go:107] OpenAPI AggregationController: Processing item 
	I0412 20:04:54.785323       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0412 20:04:54.785532       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0412 20:04:54.981976       1 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
	I0412 20:04:54.989210       1 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
	I0412 20:04:54.989520       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0412 20:04:55.602026       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I0412 20:04:56.835537       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0412 20:04:57.115593       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0412 20:04:57.408794       1 lease.go:222] Resetting endpoints for master service "kubernetes" to [192.168.67.2]
	I0412 20:04:57.409902       1 controller.go:606] quota admission added evaluator for: endpoints
	I0412 20:04:58.035069       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I0412 20:04:58.723065       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I0412 20:04:59.062703       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I0412 20:05:14.419802       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	I0412 20:05:14.457130       1 controller.go:606] quota admission added evaluator for: events.events.k8s.io
	I0412 20:05:14.798379       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	
	* 
	* ==> kube-controller-manager [e3d3ef830b73a6caad316df060603879e4acd4e12edca47bc38cbc8b4e8f67a1] <==
	* I0412 20:05:14.416160       1 shared_informer.go:204] Caches are synced for daemon sets 
	I0412 20:05:14.416404       1 shared_informer.go:204] Caches are synced for persistent volume 
	I0412 20:05:14.416449       1 shared_informer.go:204] Caches are synced for GC 
	I0412 20:05:14.416458       1 shared_informer.go:204] Caches are synced for stateful set 
	I0412 20:05:14.420747       1 shared_informer.go:204] Caches are synced for namespace 
	I0412 20:05:14.446207       1 log.go:172] [INFO] signed certificate with serial number 553674720293122649670790457411009586856850398380
	I0412 20:05:14.452389       1 event.go:255] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"d91a3f48-91ea-4047-96eb-febc4fd5896f", APIVersion:"apps/v1", ResourceVersion:"198", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-nt4pk
	I0412 20:05:14.453892       1 event.go:255] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"58fcbd78-08ad-4c23-81c3-6b4bc4796f4f", APIVersion:"apps/v1", ResourceVersion:"208", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kindnet-xxqjk
	E0412 20:05:14.485627       1 daemon_controller.go:302] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"d91a3f48-91ea-4047-96eb-febc4fd5896f", ResourceVersion:"198", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63785390699, loc:(*time.Location)(0x7776000)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc0014eb6e0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Names
pace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeS
ource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc001683ec0), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0014eb700), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolu
meSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIV
olumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0014eb720), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.A
zureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.16.0", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc0014eb760)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMo
de)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc0017e04b0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0016e8778), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"beta.kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServic
eAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00168ede0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy
{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc00099e7e8)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc0016e87b8)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	I0412 20:05:14.499652       1 shared_informer.go:204] Caches are synced for cidrallocator 
	I0412 20:05:14.512453       1 range_allocator.go:359] Set node old-k8s-version-20220412200421-42006 PodCIDR to [10.244.0.0/24]
	I0412 20:05:14.581829       1 shared_informer.go:204] Caches are synced for HPA 
	I0412 20:05:14.766250       1 shared_informer.go:204] Caches are synced for ReplicaSet 
	I0412 20:05:14.796326       1 shared_informer.go:204] Caches are synced for deployment 
	I0412 20:05:14.802095       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"c3850259-9414-497e-b19b-05b488cd9753", APIVersion:"apps/v1", ResourceVersion:"336", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-5644d7b6d9 to 1
	I0412 20:05:14.808727       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-5644d7b6d9", UID:"1497655a-7413-453d-bf35-8edfda600b44", APIVersion:"apps/v1", ResourceVersion:"337", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-5644d7b6d9-z6lnj
	I0412 20:05:14.815644       1 shared_informer.go:204] Caches are synced for disruption 
	I0412 20:05:14.815672       1 disruption.go:341] Sending events to api server.
	I0412 20:05:14.882180       1 shared_informer.go:204] Caches are synced for resource quota 
	I0412 20:05:14.920223       1 shared_informer.go:204] Caches are synced for garbage collector 
	I0412 20:05:14.920251       1 garbagecollector.go:139] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0412 20:05:14.920729       1 shared_informer.go:204] Caches are synced for resource quota 
	I0412 20:05:14.978270       1 shared_informer.go:204] Caches are synced for job 
	I0412 20:05:15.817797       1 shared_informer.go:197] Waiting for caches to sync for garbage collector
	I0412 20:05:15.924972       1 shared_informer.go:204] Caches are synced for garbage collector 
	
	* 
	* ==> kube-proxy [d1642a69585f2b5d8f43901e8a491cead56c56ef33038261d4145d7959922b9b] <==
	* W0412 20:05:15.109854       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I0412 20:05:15.118694       1 node.go:135] Successfully retrieved node IP: 192.168.67.2
	I0412 20:05:15.118739       1 server_others.go:149] Using iptables Proxier.
	I0412 20:05:15.119285       1 server.go:529] Version: v1.16.0
	I0412 20:05:15.119941       1 config.go:131] Starting endpoints config controller
	I0412 20:05:15.119963       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I0412 20:05:15.119997       1 config.go:313] Starting service config controller
	I0412 20:05:15.120007       1 shared_informer.go:197] Waiting for caches to sync for service config
	I0412 20:05:15.220204       1 shared_informer.go:204] Caches are synced for endpoints config 
	I0412 20:05:15.220290       1 shared_informer.go:204] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [6cc69a6c92a9c7e418d30d94f1777cbd24a28b39c530a70bc05aa2bb9749c133] <==
	* I0412 20:04:53.828463       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	I0412 20:04:53.829174       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	E0412 20:04:53.893487       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0412 20:04:53.893757       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0412 20:04:53.893903       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0412 20:04:53.895116       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0412 20:04:53.895227       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0412 20:04:53.895262       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0412 20:04:53.896417       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0412 20:04:53.896583       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0412 20:04:53.898962       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0412 20:04:53.899567       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0412 20:04:53.899864       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0412 20:04:54.895250       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0412 20:04:54.898563       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0412 20:04:54.899824       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0412 20:04:54.900936       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0412 20:04:54.909762       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0412 20:04:54.911797       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0412 20:04:54.914318       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0412 20:04:54.915374       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0412 20:04:54.916368       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0412 20:04:54.923327       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0412 20:04:54.982883       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0412 20:05:14.813397       1 factory.go:585] pod is already present in the activeQ
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2022-04-12 20:04:30 UTC, end at Tue 2022-04-12 20:09:16 UTC. --
	Apr 12 20:07:13 old-k8s-version-20220412200421-42006 kubelet[896]: E0412 20:07:13.793409     896 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Apr 12 20:07:18 old-k8s-version-20220412200421-42006 kubelet[896]: E0412 20:07:18.794164     896 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Apr 12 20:07:23 old-k8s-version-20220412200421-42006 kubelet[896]: E0412 20:07:23.794953     896 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Apr 12 20:07:28 old-k8s-version-20220412200421-42006 kubelet[896]: E0412 20:07:28.795701     896 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Apr 12 20:07:33 old-k8s-version-20220412200421-42006 kubelet[896]: E0412 20:07:33.796608     896 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Apr 12 20:07:38 old-k8s-version-20220412200421-42006 kubelet[896]: E0412 20:07:38.797466     896 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Apr 12 20:07:43 old-k8s-version-20220412200421-42006 kubelet[896]: E0412 20:07:43.798357     896 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Apr 12 20:07:48 old-k8s-version-20220412200421-42006 kubelet[896]: E0412 20:07:48.799178     896 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Apr 12 20:07:53 old-k8s-version-20220412200421-42006 kubelet[896]: E0412 20:07:53.800119     896 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Apr 12 20:07:58 old-k8s-version-20220412200421-42006 kubelet[896]: E0412 20:07:58.800926     896 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Apr 12 20:08:03 old-k8s-version-20220412200421-42006 kubelet[896]: E0412 20:08:03.801717     896 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Apr 12 20:08:08 old-k8s-version-20220412200421-42006 kubelet[896]: E0412 20:08:08.802567     896 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Apr 12 20:08:13 old-k8s-version-20220412200421-42006 kubelet[896]: E0412 20:08:13.803373     896 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Apr 12 20:08:18 old-k8s-version-20220412200421-42006 kubelet[896]: E0412 20:08:18.804283     896 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Apr 12 20:08:23 old-k8s-version-20220412200421-42006 kubelet[896]: E0412 20:08:23.805116     896 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Apr 12 20:08:28 old-k8s-version-20220412200421-42006 kubelet[896]: E0412 20:08:28.805892     896 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Apr 12 20:08:33 old-k8s-version-20220412200421-42006 kubelet[896]: E0412 20:08:33.806676     896 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Apr 12 20:08:38 old-k8s-version-20220412200421-42006 kubelet[896]: E0412 20:08:38.807515     896 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Apr 12 20:08:43 old-k8s-version-20220412200421-42006 kubelet[896]: E0412 20:08:43.808327     896 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Apr 12 20:08:48 old-k8s-version-20220412200421-42006 kubelet[896]: E0412 20:08:48.809183     896 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Apr 12 20:08:53 old-k8s-version-20220412200421-42006 kubelet[896]: E0412 20:08:53.810001     896 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Apr 12 20:08:58 old-k8s-version-20220412200421-42006 kubelet[896]: E0412 20:08:58.810811     896 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Apr 12 20:09:03 old-k8s-version-20220412200421-42006 kubelet[896]: E0412 20:09:03.811693     896 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Apr 12 20:09:08 old-k8s-version-20220412200421-42006 kubelet[896]: E0412 20:09:08.812542     896 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Apr 12 20:09:13 old-k8s-version-20220412200421-42006 kubelet[896]: E0412 20:09:13.813267     896 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20220412200421-42006 -n old-k8s-version-20220412200421-42006
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-20220412200421-42006 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: coredns-5644d7b6d9-z6lnj storage-provisioner
helpers_test.go:272: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/FirstStart]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context old-k8s-version-20220412200421-42006 describe pod coredns-5644d7b6d9-z6lnj storage-provisioner
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context old-k8s-version-20220412200421-42006 describe pod coredns-5644d7b6d9-z6lnj storage-provisioner: exit status 1 (56.459876ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-5644d7b6d9-z6lnj" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context old-k8s-version-20220412200421-42006 describe pod coredns-5644d7b6d9-z6lnj storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (295.57s)
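
Triage note: the trace above shows the node pinned at Ready=False for the entire 6m wait; the kubelet logged "cni plugin not initialized" from 20:05:14 through 20:09:13, containerd reported "No cni config template is specified, wait for other system components to drop the config.", and the kindnet-cni container exited once (20:07:55) and was restarted without the condition clearing. The final post-mortem describe returned NotFound apparently because it queried the default namespace while both pods live in kube-system. A minimal manual triage sketch against the same profile, assuming the cluster is still up (the profile and context names are taken from the log above; /etc/cni/net.mk is the cni-conf-dir these tests pass to the kubelet, as in the embed-certs stdout below):

	kubectl --context old-k8s-version-20220412200421-42006 get nodes -o wide
	kubectl --context old-k8s-version-20220412200421-42006 -n kube-system get pods -o wide
	kubectl --context old-k8s-version-20220412200421-42006 -n kube-system describe pod coredns-5644d7b6d9-z6lnj storage-provisioner
	# check whether kindnet ever wrote a CNI config into the dir the kubelet watches
	out/minikube-linux-amd64 ssh -p old-k8s-version-20220412200421-42006 -- ls -la /etc/cni/net.mk
	# collect full logs for a GitHub issue, as the error box above suggests
	out/minikube-linux-amd64 logs -p old-k8s-version-20220412200421-42006 --file=logs.txt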

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (299.66s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:170: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-20220412200510-42006 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.5
E0412 20:05:31.558142   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/auto-20220412195201-42006/client.crt: no such file or directory
E0412 20:05:54.807863   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/ingress-addon-legacy-20220412192911-42006/client.crt: no such file or directory
E0412 20:05:58.260591   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/custom-weave-20220412195203-42006/client.crt: no such file or directory
E0412 20:05:59.241847   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/auto-20220412195201-42006/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:170: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p embed-certs-20220412200510-42006 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.5: exit status 80 (4m57.49785461s)

                                                
                                                
-- stdout --
	* [embed-certs-20220412200510-42006] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=13812
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on user configuration
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	* Using Docker driver with the root privilege
	* Starting control plane node embed-certs-20220412200510-42006 in cluster embed-certs-20220412200510-42006
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.23.5 on containerd 1.5.10 ...
	  - kubelet.cni-conf-dir=/etc/cni/net.mk
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0412 20:05:10.207661  255510 out.go:297] Setting OutFile to fd 1 ...
	I0412 20:05:10.207948  255510 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0412 20:05:10.207958  255510 out.go:310] Setting ErrFile to fd 2...
	I0412 20:05:10.207967  255510 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0412 20:05:10.208212  255510 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/bin
	I0412 20:05:10.208649  255510 out.go:304] Setting JSON to false
	I0412 20:05:10.210864  255510 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":10063,"bootTime":1649783847,"procs":410,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1023-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0412 20:05:10.210960  255510 start.go:125] virtualization: kvm guest
	I0412 20:05:10.214430  255510 out.go:176] * [embed-certs-20220412200510-42006] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	I0412 20:05:10.216324  255510 out.go:176]   - MINIKUBE_LOCATION=13812
	I0412 20:05:10.214692  255510 notify.go:193] Checking for updates...
	I0412 20:05:10.218419  255510 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0412 20:05:10.220597  255510 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0412 20:05:10.222109  255510 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube
	I0412 20:05:10.223555  255510 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0412 20:05:10.224332  255510 config.go:178] Loaded profile config "bridge-20220412195202-42006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.5
	I0412 20:05:10.224506  255510 config.go:178] Loaded profile config "no-preload-20220412200453-42006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6-rc.0
	I0412 20:05:10.224641  255510 config.go:178] Loaded profile config "old-k8s-version-20220412200421-42006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I0412 20:05:10.224704  255510 driver.go:346] Setting default libvirt URI to qemu:///system
	I0412 20:05:10.281563  255510 docker.go:137] docker version: linux-20.10.14
	I0412 20:05:10.281697  255510 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0412 20:05:10.396548  255510 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:75 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:true NGoroutines:61 SystemTime:2022-04-12 20:05:10.319971855 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1023-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662799872 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0412 20:05:10.396646  255510 docker.go:254] overlay module found
	I0412 20:05:10.399039  255510 out.go:176] * Using the docker driver based on user configuration
	I0412 20:05:10.399076  255510 start.go:284] selected driver: docker
	I0412 20:05:10.399083  255510 start.go:801] validating driver "docker" against <nil>
	I0412 20:05:10.399108  255510 start.go:812] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	W0412 20:05:10.399185  255510 oci.go:120] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0412 20:05:10.399220  255510 out.go:241] ! Your cgroup does not allow setting memory.
	! Your cgroup does not allow setting memory.
	I0412 20:05:10.400945  255510 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
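
Note: the cgroup warning above fires even though the docker info output around it reports MemoryLimit:true and SwapLimit:true, so the oci.go check evidently looks at the host's cgroup hierarchy rather than these fields. A minimal sketch for inspecting the relevant docker info fields by hand, assuming jq is installed on the host (jq is not used by the test itself):

	# pick out the cgroup-related capability fields from docker's JSON info
	docker system info --format '{{json .}}' | jq '{MemoryLimit, SwapLimit, CgroupDriver}'
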
	I0412 20:05:10.401807  255510 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0412 20:05:10.521318  255510 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:75 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:true NGoroutines:61 SystemTime:2022-04-12 20:05:10.439672121 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1023-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662799872 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0412 20:05:10.521452  255510 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0412 20:05:10.521670  255510 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0412 20:05:10.523831  255510 out.go:176] * Using Docker driver with the root privilege
	I0412 20:05:10.523860  255510 cni.go:93] Creating CNI manager for ""
	I0412 20:05:10.523870  255510 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0412 20:05:10.523897  255510 cni.go:217] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0412 20:05:10.523912  255510 cni.go:222] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0412 20:05:10.523920  255510 start_flags.go:301] Found "CNI" CNI - setting NetworkPlugin=cni
	I0412 20:05:10.523944  255510 start_flags.go:306] config:
	{Name:embed-certs-20220412200510-42006 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:embed-certs-20220412200510-42006 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0412 20:05:10.525747  255510 out.go:176] * Starting control plane node embed-certs-20220412200510-42006 in cluster embed-certs-20220412200510-42006
	I0412 20:05:10.525796  255510 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0412 20:05:10.527282  255510 out.go:176] * Pulling base image ...
	I0412 20:05:10.527319  255510 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime containerd
	I0412 20:05:10.527360  255510 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-containerd-overlay2-amd64.tar.lz4
	I0412 20:05:10.527378  255510 cache.go:57] Caching tarball of preloaded images
	I0412 20:05:10.527462  255510 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon
	I0412 20:05:10.527721  255510 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0412 20:05:10.527740  255510 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.5 on containerd
	I0412 20:05:10.527895  255510 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/embed-certs-20220412200510-42006/config.json ...
	I0412 20:05:10.527935  255510 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/embed-certs-20220412200510-42006/config.json: {Name:mk340cd1e14d7b6b3e19542f70587c269ae4156e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 20:05:10.581303  255510 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon, skipping pull
	I0412 20:05:10.581339  255510 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 exists in daemon, skipping load
	I0412 20:05:10.581358  255510 cache.go:206] Successfully downloaded all kic artifacts
	I0412 20:05:10.581418  255510 start.go:352] acquiring machines lock for embed-certs-20220412200510-42006: {Name:mk64f255895db788ec660fe05e5b2f5e43e4987c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0412 20:05:10.581606  255510 start.go:356] acquired machines lock for "embed-certs-20220412200510-42006" in 156.73µs
	I0412 20:05:10.581647  255510 start.go:91] Provisioning new machine with config: &{Name:embed-certs-20220412200510-42006 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:embed-certs-20220412200510-42006 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0412 20:05:10.581787  255510 start.go:131] createHost starting for "" (driver="docker")
	I0412 20:05:10.584317  255510 out.go:203] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0412 20:05:10.584708  255510 start.go:165] libmachine.API.Create for "embed-certs-20220412200510-42006" (driver="docker")
	I0412 20:05:10.584758  255510 client.go:168] LocalClient.Create starting
	I0412 20:05:10.584854  255510 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem
	I0412 20:05:10.584902  255510 main.go:134] libmachine: Decoding PEM data...
	I0412 20:05:10.584921  255510 main.go:134] libmachine: Parsing certificate...
	I0412 20:05:10.584991  255510 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem
	I0412 20:05:10.585011  255510 main.go:134] libmachine: Decoding PEM data...
	I0412 20:05:10.585028  255510 main.go:134] libmachine: Parsing certificate...
	I0412 20:05:10.585496  255510 cli_runner.go:164] Run: docker network inspect embed-certs-20220412200510-42006 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0412 20:05:10.628175  255510 cli_runner.go:211] docker network inspect embed-certs-20220412200510-42006 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0412 20:05:10.628255  255510 network_create.go:272] running [docker network inspect embed-certs-20220412200510-42006] to gather additional debugging logs...
	I0412 20:05:10.628287  255510 cli_runner.go:164] Run: docker network inspect embed-certs-20220412200510-42006
	W0412 20:05:10.669234  255510 cli_runner.go:211] docker network inspect embed-certs-20220412200510-42006 returned with exit code 1
	I0412 20:05:10.669267  255510 network_create.go:275] error running [docker network inspect embed-certs-20220412200510-42006]: docker network inspect embed-certs-20220412200510-42006: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: embed-certs-20220412200510-42006
	I0412 20:05:10.669282  255510 network_create.go:277] output of [docker network inspect embed-certs-20220412200510-42006]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: embed-certs-20220412200510-42006
	
	** /stderr **
	I0412 20:05:10.669329  255510 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0412 20:05:10.713971  255510 network.go:240] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-a823a5d7e3fc IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:a7:22:c4:86}}
	I0412 20:05:10.714922  255510 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.58.0:0xc0003ce0f0] misses:0}
	I0412 20:05:10.714972  255510 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0412 20:05:10.714995  255510 network_create.go:115] attempt to create docker network embed-certs-20220412200510-42006 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0412 20:05:10.715064  255510 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true embed-certs-20220412200510-42006
	I0412 20:05:10.799425  255510 network_create.go:99] docker network embed-certs-20220412200510-42006 192.168.58.0/24 created
	I0412 20:05:10.799472  255510 kic.go:106] calculated static IP "192.168.58.2" for the "embed-certs-20220412200510-42006" container
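
Note: the lines above show minikube's subnet picker skipping the in-use 192.168.49.0/24, reserving 192.168.58.0/24, and creating the bridge network before deriving the node's static IP. A sketch for confirming the result by hand, using the same Go-template style the test's own inspect calls use:

	# verify the subnet and gateway of the network minikube just created
	docker network inspect embed-certs-20220412200510-42006 \
	  --format 'subnet={{(index .IPAM.Config 0).Subnet}} gateway={{(index .IPAM.Config 0).Gateway}}'
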
	I0412 20:05:10.799555  255510 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0412 20:05:10.840779  255510 cli_runner.go:164] Run: docker volume create embed-certs-20220412200510-42006 --label name.minikube.sigs.k8s.io=embed-certs-20220412200510-42006 --label created_by.minikube.sigs.k8s.io=true
	I0412 20:05:10.881963  255510 oci.go:103] Successfully created a docker volume embed-certs-20220412200510-42006
	I0412 20:05:10.882073  255510 cli_runner.go:164] Run: docker run --rm --name embed-certs-20220412200510-42006-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-20220412200510-42006 --entrypoint /usr/bin/test -v embed-certs-20220412200510-42006:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -d /var/lib
	I0412 20:05:12.194750  255510 cli_runner.go:217] Completed: docker run --rm --name embed-certs-20220412200510-42006-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-20220412200510-42006 --entrypoint /usr/bin/test -v embed-certs-20220412200510-42006:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -d /var/lib: (1.312624338s)
	I0412 20:05:12.194788  255510 oci.go:107] Successfully prepared a docker volume embed-certs-20220412200510-42006
	I0412 20:05:12.194862  255510 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime containerd
	I0412 20:05:12.194895  255510 kic.go:179] Starting extracting preloaded images to volume ...
	I0412 20:05:12.194967  255510 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-20220412200510-42006:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -I lz4 -xf /preloaded.tar -C /extractDir
	I0412 20:05:23.173765  255510 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-20220412200510-42006:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -I lz4 -xf /preloaded.tar -C /extractDir: (10.978703266s)
	I0412 20:05:23.173803  255510 kic.go:188] duration metric: took 10.978904 seconds to extract preloaded images to volume
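
Note: the preload is unpacked into the named docker volume by a throwaway container whose entrypoint is tar, exactly as in the Run line above. The same pattern, generalized with hypothetical volume and tarball names (image digest omitted here for brevity):

	# extract an lz4 tarball into a named volume via a one-shot container
	docker run --rm --entrypoint /usr/bin/tar \
	  -v "$PWD/preloaded.tar.lz4:/preloaded.tar:ro" \
	  -v some-volume:/extractDir \
	  gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815 \
	  -I lz4 -xf /preloaded.tar -C /extractDir
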
	W0412 20:05:23.173846  255510 oci.go:136] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0412 20:05:23.173860  255510 oci.go:120] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0412 20:05:23.173922  255510 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0412 20:05:23.269372  255510 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-20220412200510-42006 --name embed-certs-20220412200510-42006 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-20220412200510-42006 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-20220412200510-42006 --network embed-certs-20220412200510-42006 --ip 192.168.58.2 --volume embed-certs-20220412200510-42006:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5
	I0412 20:05:24.133748  255510 cli_runner.go:164] Run: docker container inspect embed-certs-20220412200510-42006 --format={{.State.Running}}
	I0412 20:05:24.171186  255510 cli_runner.go:164] Run: docker container inspect embed-certs-20220412200510-42006 --format={{.State.Status}}
	I0412 20:05:24.207962  255510 cli_runner.go:164] Run: docker exec embed-certs-20220412200510-42006 stat /var/lib/dpkg/alternatives/iptables
	I0412 20:05:24.282240  255510 oci.go:279] the created container "embed-certs-20220412200510-42006" has a running status.
	I0412 20:05:24.282287  255510 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/embed-certs-20220412200510-42006/id_rsa...
	I0412 20:05:24.374056  255510 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/embed-certs-20220412200510-42006/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0412 20:05:24.479920  255510 cli_runner.go:164] Run: docker container inspect embed-certs-20220412200510-42006 --format={{.State.Status}}
	I0412 20:05:24.522586  255510 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0412 20:05:24.522614  255510 kic_runner.go:114] Args: [docker exec --privileged embed-certs-20220412200510-42006 chown docker:docker /home/docker/.ssh/authorized_keys]
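
Note: with the generated key installed in /home/docker/.ssh/authorized_keys above, the node is reachable over the forwarded SSH port (49402 in this run, per the libmachine lines below). A sketch of connecting manually with the same key:

	# ssh into the kic node over the published 22/tcp port mapping
	ssh -i /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/embed-certs-20220412200510-42006/id_rsa \
	  -p 49402 docker@127.0.0.1
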
	I0412 20:05:24.633376  255510 cli_runner.go:164] Run: docker container inspect embed-certs-20220412200510-42006 --format={{.State.Status}}
	I0412 20:05:24.670411  255510 machine.go:88] provisioning docker machine ...
	I0412 20:05:24.670459  255510 ubuntu.go:169] provisioning hostname "embed-certs-20220412200510-42006"
	I0412 20:05:24.670547  255510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220412200510-42006
	I0412 20:05:24.710826  255510 main.go:134] libmachine: Using SSH client type: native
	I0412 20:05:24.711060  255510 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7d71c0] 0x7da220 <nil>  [] 0s} 127.0.0.1 49402 <nil> <nil>}
	I0412 20:05:24.711092  255510 main.go:134] libmachine: About to run SSH command:
	sudo hostname embed-certs-20220412200510-42006 && echo "embed-certs-20220412200510-42006" | sudo tee /etc/hostname
	I0412 20:05:24.841856  255510 main.go:134] libmachine: SSH cmd err, output: <nil>: embed-certs-20220412200510-42006
	
	I0412 20:05:24.841956  255510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220412200510-42006
	I0412 20:05:24.878534  255510 main.go:134] libmachine: Using SSH client type: native
	I0412 20:05:24.878704  255510 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7d71c0] 0x7da220 <nil>  [] 0s} 127.0.0.1 49402 <nil> <nil>}
	I0412 20:05:24.878729  255510 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-20220412200510-42006' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-20220412200510-42006/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-20220412200510-42006' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0412 20:05:25.000556  255510 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0412 20:05:25.000598  255510 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube}
	I0412 20:05:25.000625  255510 ubuntu.go:177] setting up certificates
	I0412 20:05:25.000639  255510 provision.go:83] configureAuth start
	I0412 20:05:25.000711  255510 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220412200510-42006
	I0412 20:05:25.034699  255510 provision.go:138] copyHostCerts
	I0412 20:05:25.034776  255510 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem, removing ...
	I0412 20:05:25.034790  255510 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem
	I0412 20:05:25.034872  255510 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem (1082 bytes)
	I0412 20:05:25.034968  255510 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem, removing ...
	I0412 20:05:25.034981  255510 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem
	I0412 20:05:25.035017  255510 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem (1123 bytes)
	I0412 20:05:25.035108  255510 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem, removing ...
	I0412 20:05:25.035126  255510 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem
	I0412 20:05:25.035157  255510 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem (1675 bytes)
	I0412 20:05:25.035224  255510 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem org=jenkins.embed-certs-20220412200510-42006 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-20220412200510-42006]
	I0412 20:05:25.158456  255510 provision.go:172] copyRemoteCerts
	I0412 20:05:25.158527  255510 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0412 20:05:25.158565  255510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220412200510-42006
	I0412 20:05:25.193672  255510 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49402 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/embed-certs-20220412200510-42006/id_rsa Username:docker}
	I0412 20:05:25.284353  255510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0412 20:05:25.305257  255510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem --> /etc/docker/server.pem (1269 bytes)
	I0412 20:05:25.323982  255510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0412 20:05:25.342420  255510 provision.go:86] duration metric: configureAuth took 341.761293ms
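
Note: the server certificate copied to /etc/docker/server.pem above was generated with the SAN list from the provision.go line (192.168.58.2, 127.0.0.1, localhost, minikube, and the profile name). A sketch for verifying those SANs on the node, assuming openssl is available there:

	# print the Subject Alternative Name extension of the provisioned cert
	openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'
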
	I0412 20:05:25.342457  255510 ubuntu.go:193] setting minikube options for container-runtime
	I0412 20:05:25.342640  255510 config.go:178] Loaded profile config "embed-certs-20220412200510-42006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.5
	I0412 20:05:25.342655  255510 machine.go:91] provisioned docker machine in 672.215563ms
	I0412 20:05:25.342664  255510 client.go:171] LocalClient.Create took 14.757890599s
	I0412 20:05:25.342687  255510 start.go:173] duration metric: libmachine.API.Create for "embed-certs-20220412200510-42006" took 14.757982725s
	I0412 20:05:25.342707  255510 start.go:306] post-start starting for "embed-certs-20220412200510-42006" (driver="docker")
	I0412 20:05:25.342719  255510 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0412 20:05:25.342770  255510 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0412 20:05:25.342824  255510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220412200510-42006
	I0412 20:05:25.379475  255510 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49402 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/embed-certs-20220412200510-42006/id_rsa Username:docker}
	I0412 20:05:25.480512  255510 ssh_runner.go:195] Run: cat /etc/os-release
	I0412 20:05:25.483703  255510 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0412 20:05:25.483732  255510 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0412 20:05:25.483748  255510 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0412 20:05:25.483757  255510 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0412 20:05:25.483771  255510 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/addons for local assets ...
	I0412 20:05:25.483839  255510 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files for local assets ...
	I0412 20:05:25.483918  255510 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/420062.pem -> 420062.pem in /etc/ssl/certs
	I0412 20:05:25.483999  255510 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0412 20:05:25.492212  255510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/420062.pem --> /etc/ssl/certs/420062.pem (1708 bytes)
	I0412 20:05:25.514154  255510 start.go:309] post-start completed in 171.425587ms
	I0412 20:05:25.514610  255510 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220412200510-42006
	I0412 20:05:25.548907  255510 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/embed-certs-20220412200510-42006/config.json ...
	I0412 20:05:25.549254  255510 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0412 20:05:25.549314  255510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220412200510-42006
	I0412 20:05:25.584011  255510 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49402 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/embed-certs-20220412200510-42006/id_rsa Username:docker}
	I0412 20:05:25.673218  255510 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0412 20:05:25.677701  255510 start.go:134] duration metric: createHost completed in 15.095896528s
	I0412 20:05:25.677733  255510 start.go:81] releasing machines lock for "embed-certs-20220412200510-42006", held for 15.096105376s
	I0412 20:05:25.677817  255510 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220412200510-42006
	I0412 20:05:25.715803  255510 ssh_runner.go:195] Run: systemctl --version
	I0412 20:05:25.715862  255510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220412200510-42006
	I0412 20:05:25.715880  255510 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0412 20:05:25.715948  255510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220412200510-42006
	I0412 20:05:25.753288  255510 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49402 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/embed-certs-20220412200510-42006/id_rsa Username:docker}
	I0412 20:05:25.756453  255510 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49402 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/embed-certs-20220412200510-42006/id_rsa Username:docker}
	I0412 20:05:25.859782  255510 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0412 20:05:25.870987  255510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0412 20:05:25.881625  255510 docker.go:183] disabling docker service ...
	I0412 20:05:25.881693  255510 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0412 20:05:25.900907  255510 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0412 20:05:25.911506  255510 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0412 20:05:25.994224  255510 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0412 20:05:26.084164  255510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0412 20:05:26.094793  255510 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0412 20:05:26.111366  255510 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLm1vbml0b3IudjEuY2dyb3VwcyJdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNiIKICAgIHN0YXRzX2NvbGxlY3RfcGVyaW9kID0gMTAKICAgIGVuYWJsZV90bHNfc3RyZWFtaW5nID0gZmFsc2UKICAgIG1heF9jb250YWluZXJfbG9nX2xpbmVfc2l6ZSA9IDE2Mzg0CiAgICByZXN0cmljdF9vb21fc2NvcmVfYWRqID0gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZF0KICAgICAgZGlzY2FyZF91bnBhY2tlZF9sYXllcnMgPSB0cnVlCiAgICAgIHNuYXBzaG90dGVyID0gIm92ZXJsYXlmcyIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQuZGVmYXVsdF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnVudHJ1c3RlZF93b3JrbG9hZF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICIiCiAgICAgICAgcnVudGltZV9lbmdpbmUgPSAiIgogICAgICAgIHJ1bnRpbWVfcm9vdCA9ICIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzLnJ1bmNdCiAgICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICBTeXN0ZW1kQ2dyb3VwID0gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY25pXQogICAgICBiaW5fZGlyID0gIi9vcHQvY25pL2JpbiIKICAgICAgY29uZl9kaXIgPSAiL2V0Yy9jbmkvbmV0Lm1rIgogICAgICBjb25mX3RlbXBsYXRlID0gIiIKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5yZWdpc3RyeV0KICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5Lm1pcnJvcnNdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5Lm1pcnJvcnMuImRvY2tlci5pbyJdCiAgICAgICAgICBlbmRwb2ludCA9IFsiaHR0cHM6Ly9yZWdpc3RyeS0xLmRvY2tlci5pbyJdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuc2VydmljZS52MS5kaWZmLXNlcnZpY2UiXQogICAgZGVmYXVsdCA9IFsid2Fsa2luZyJdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ2MudjEuc2NoZWR1bGVyIl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
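
Note: the containerd config is shipped as the base64 blob above and written to /etc/containerd/config.toml on the node; decoded, it includes conf_dir = "/etc/cni/net.mk", matching the kubelet.cni-conf-dir extra-config set earlier in this run. Two ways to read it, with $BLOB standing in for the base64 string above:

	# decode the shipped blob locally and find the CNI conf directory
	echo "$BLOB" | base64 -d | grep -n conf_dir
	# or inspect the written file in place on the node
	docker exec embed-certs-20220412200510-42006 cat /etc/containerd/config.toml
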
	I0412 20:05:26.126256  255510 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0412 20:05:26.133542  255510 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0412 20:05:26.140938  255510 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0412 20:05:26.229935  255510 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0412 20:05:26.304321  255510 start.go:441] Will wait 60s for socket path /run/containerd/containerd.sock
	I0412 20:05:26.304407  255510 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0412 20:05:26.309012  255510 start.go:462] Will wait 60s for crictl version
	I0412 20:05:26.309094  255510 ssh_runner.go:195] Run: sudo crictl version
	I0412 20:05:26.342675  255510 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-04-12T20:05:26Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
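
Note: "server is not initialized yet" is transient while containerd comes back up after the restart above; retry.go backs off roughly 11s and the next attempt below succeeds. A hand-rolled equivalent of that wait, as a sketch bounded like the 60s budgets in start.go:

	# poll the CRI endpoint until it answers, up to ~60s
	for _ in $(seq 1 30); do
	  sudo crictl version && break
	  sleep 2
	done
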
	I0412 20:05:37.390463  255510 ssh_runner.go:195] Run: sudo crictl version
	I0412 20:05:37.415825  255510 start.go:471] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.5.10
	RuntimeApiVersion:  v1alpha2
	I0412 20:05:37.415893  255510 ssh_runner.go:195] Run: containerd --version
	I0412 20:05:37.435457  255510 ssh_runner.go:195] Run: containerd --version
	I0412 20:05:37.458782  255510 out.go:176] * Preparing Kubernetes v1.23.5 on containerd 1.5.10 ...
	I0412 20:05:37.458874  255510 cli_runner.go:164] Run: docker network inspect embed-certs-20220412200510-42006 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0412 20:05:37.491724  255510 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0412 20:05:37.495778  255510 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0412 20:05:37.508741  255510 out.go:176]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0412 20:05:37.508852  255510 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime containerd
	I0412 20:05:37.508939  255510 ssh_runner.go:195] Run: sudo crictl images --output json
	I0412 20:05:37.534348  255510 containerd.go:607] all images are preloaded for containerd runtime.
	I0412 20:05:37.534381  255510 containerd.go:521] Images already preloaded, skipping extraction
	I0412 20:05:37.534434  255510 ssh_runner.go:195] Run: sudo crictl images --output json
	I0412 20:05:37.559657  255510 containerd.go:607] all images are preloaded for containerd runtime.
	I0412 20:05:37.559682  255510 cache_images.go:84] Images are preloaded, skipping loading
	I0412 20:05:37.559724  255510 ssh_runner.go:195] Run: sudo crictl info
	I0412 20:05:37.584869  255510 cni.go:93] Creating CNI manager for ""
	I0412 20:05:37.584896  255510 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0412 20:05:37.584910  255510 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0412 20:05:37.584933  255510 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.23.5 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-20220412200510-42006 NodeName:embed-certs-20220412200510-42006 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0412 20:05:37.585098  255510 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "embed-certs-20220412200510-42006"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.5
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0412 20:05:37.585190  255510 kubeadm.go:936] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.5/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=embed-certs-20220412200510-42006 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.58.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.5 ClusterName:embed-certs-20220412200510-42006 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0412 20:05:37.585257  255510 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.5
	I0412 20:05:37.593027  255510 binaries.go:44] Found k8s binaries, skipping transfer
	I0412 20:05:37.593102  255510 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0412 20:05:37.601999  255510 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (577 bytes)
	I0412 20:05:37.616296  255510 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0412 20:05:37.631155  255510 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2061 bytes)
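
Note: the rendered kubeadm config is staged at /var/tmp/minikube/kubeadm.yaml.new (2061 bytes, per the scp line above). One way to sanity-check a config like this without bootstrapping anything is kubeadm's dry-run mode; a sketch, assuming the v1.23.5 binaries already found on the node:

	# render what kubeadm would do with the staged config, without applying it
	sudo /var/lib/minikube/binaries/v1.23.5/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run
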
	I0412 20:05:37.644574  255510 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0412 20:05:37.647549  255510 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0412 20:05:37.656949  255510 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/embed-certs-20220412200510-42006 for IP: 192.168.58.2
	I0412 20:05:37.657061  255510 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.key
	I0412 20:05:37.657108  255510 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.key
	I0412 20:05:37.657155  255510 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/embed-certs-20220412200510-42006/client.key
	I0412 20:05:37.657171  255510 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/embed-certs-20220412200510-42006/client.crt with IP's: []
	I0412 20:05:37.806545  255510 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/embed-certs-20220412200510-42006/client.crt ...
	I0412 20:05:37.806580  255510 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/embed-certs-20220412200510-42006/client.crt: {Name:mk83254632282ae41b1ee44e8bd9a195a5f739a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 20:05:37.806826  255510 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/embed-certs-20220412200510-42006/client.key ...
	I0412 20:05:37.806844  255510 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/embed-certs-20220412200510-42006/client.key: {Name:mk3242a92bbb29b48a18b9f85929cf3e6e40a78f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 20:05:37.806954  255510 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/embed-certs-20220412200510-42006/apiserver.key.cee25041
	I0412 20:05:37.806971  255510 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/embed-certs-20220412200510-42006/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0412 20:05:38.195576  255510 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/embed-certs-20220412200510-42006/apiserver.crt.cee25041 ...
	I0412 20:05:38.195611  255510 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/embed-certs-20220412200510-42006/apiserver.crt.cee25041: {Name:mke827b83dc4e003893722c4a5232979ffdffee2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 20:05:38.195838  255510 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/embed-certs-20220412200510-42006/apiserver.key.cee25041 ...
	I0412 20:05:38.195855  255510 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/embed-certs-20220412200510-42006/apiserver.key.cee25041: {Name:mkfde4b31d8794baff8a5760a47ab049b96a3113 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 20:05:38.195941  255510 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/embed-certs-20220412200510-42006/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/embed-certs-20220412200510-42006/apiserver.crt
	I0412 20:05:38.196005  255510 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/embed-certs-20220412200510-42006/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/embed-certs-20220412200510-42006/apiserver.key
	I0412 20:05:38.196052  255510 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/embed-certs-20220412200510-42006/proxy-client.key
	I0412 20:05:38.196095  255510 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/embed-certs-20220412200510-42006/proxy-client.crt with IP's: []
	I0412 20:05:38.306654  255510 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/embed-certs-20220412200510-42006/proxy-client.crt ...
	I0412 20:05:38.306703  255510 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/embed-certs-20220412200510-42006/proxy-client.crt: {Name:mk42b32237ff9f21736e30432e1198f129754907 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 20:05:38.306916  255510 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/embed-certs-20220412200510-42006/proxy-client.key ...
	I0412 20:05:38.306930  255510 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/embed-certs-20220412200510-42006/proxy-client.key: {Name:mkd7f95c9b10213b4ded3c879ddf26995bb1d366 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 20:05:38.307098  255510 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/42006.pem (1338 bytes)
	W0412 20:05:38.307140  255510 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/42006_empty.pem, impossibly tiny 0 bytes
	I0412 20:05:38.307149  255510 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem (1679 bytes)
	I0412 20:05:38.307168  255510 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem (1082 bytes)
	I0412 20:05:38.307192  255510 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem (1123 bytes)
	I0412 20:05:38.307224  255510 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem (1675 bytes)
	I0412 20:05:38.307263  255510 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/420062.pem (1708 bytes)
	I0412 20:05:38.307930  255510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/embed-certs-20220412200510-42006/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0412 20:05:38.328217  255510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/embed-certs-20220412200510-42006/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0412 20:05:38.347675  255510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/embed-certs-20220412200510-42006/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0412 20:05:38.369411  255510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/embed-certs-20220412200510-42006/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0412 20:05:38.392338  255510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0412 20:05:38.412744  255510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0412 20:05:38.434073  255510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0412 20:05:38.453142  255510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0412 20:05:38.473141  255510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/420062.pem --> /usr/share/ca-certificates/420062.pem (1708 bytes)
	I0412 20:05:38.494205  255510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0412 20:05:38.514347  255510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/42006.pem --> /usr/share/ca-certificates/42006.pem (1338 bytes)
	I0412 20:05:38.535445  255510 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0412 20:05:38.549731  255510 ssh_runner.go:195] Run: openssl version
	I0412 20:05:38.556320  255510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/420062.pem && ln -fs /usr/share/ca-certificates/420062.pem /etc/ssl/certs/420062.pem"
	I0412 20:05:38.566441  255510 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/420062.pem
	I0412 20:05:38.570688  255510 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Apr 12 19:26 /usr/share/ca-certificates/420062.pem
	I0412 20:05:38.570766  255510 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/420062.pem
	I0412 20:05:38.577621  255510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/420062.pem /etc/ssl/certs/3ec20f2e.0"
	I0412 20:05:38.588505  255510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0412 20:05:38.600228  255510 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0412 20:05:38.604866  255510 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Apr 12 19:21 /usr/share/ca-certificates/minikubeCA.pem
	I0412 20:05:38.604934  255510 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0412 20:05:38.611639  255510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0412 20:05:38.621540  255510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/42006.pem && ln -fs /usr/share/ca-certificates/42006.pem /etc/ssl/certs/42006.pem"
	I0412 20:05:38.630776  255510 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42006.pem
	I0412 20:05:38.634496  255510 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Apr 12 19:26 /usr/share/ca-certificates/42006.pem
	I0412 20:05:38.634562  255510 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42006.pem
	I0412 20:05:38.640178  255510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/42006.pem /etc/ssl/certs/51391683.0"
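
The openssl/ln sequence above is how the extra CA certificates become trusted inside the node: each PEM copied under /usr/share/ca-certificates is hashed with "openssl x509 -hash -noout" and symlinked as /etc/ssl/certs/<hash>.0, the naming scheme OpenSSL scans when building its trust store (51391683.0 for 42006.pem in this run). A minimal Go sketch of those two steps, illustrative only (assumes openssl on PATH and reuses a cert path from the log; this is not minikube's actual source):

    // Illustrative sketch, not minikube source.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	// Hypothetical input, path taken from the log above.
    	cert := "/usr/share/ca-certificates/42006.pem"
    	// "openssl x509 -hash -noout -in <cert>" prints the subject hash, e.g. 51391683.
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
    	if err != nil {
    		fmt.Fprintln(os.Stderr, "hashing failed:", err)
    		os.Exit(1)
    	}
    	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
    	// Equivalent of "ln -fs": drop any stale link, then point <hash>.0 at the cert.
    	_ = os.Remove(link)
    	if err := os.Symlink(cert, link); err != nil {
    		fmt.Fprintln(os.Stderr, "symlink failed:", err)
    		os.Exit(1)
    	}
    	fmt.Println("linked", link, "->", cert)
    }
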
	I0412 20:05:38.648535  255510 kubeadm.go:391] StartCluster: {Name:embed-certs-20220412200510-42006 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:embed-certs-20220412200510-42006 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0412 20:05:38.648629  255510 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0412 20:05:38.648683  255510 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0412 20:05:38.675948  255510 cri.go:87] found id: ""
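
The empty "found id" result above means the CRI has no kube-system containers yet, i.e. a fresh node with nothing to pause or clean up. A sketch of the same probe, to be run on the node (illustrative only; assumes crictl is installed, as it is in the kicbase image):

    // Illustrative sketch, not minikube source.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Same filter as the Run line above: all containers, IDs only,
    	// restricted to pods in the kube-system namespace.
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
    		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
    	if err != nil {
    		panic(err)
    	}
    	ids := strings.Fields(string(out)) // one container ID per line when non-empty
    	fmt.Printf("found %d kube-system containers: %v\n", len(ids), ids)
    }
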
	I0412 20:05:38.676040  255510 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0412 20:05:38.684345  255510 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0412 20:05:38.692673  255510 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0412 20:05:38.692740  255510 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0412 20:05:38.700766  255510 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
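
The "config check failed" above is the expected path on a clean node: ls exits with status 2 because none of the kubeadm config files exist yet, so minikube skips stale-config cleanup and proceeds straight to kubeadm init. A sketch of that exit-status existence probe (illustrative only):

    // Illustrative sketch, not minikube source.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	cmd := exec.Command("sudo", "ls", "-la",
    		"/etc/kubernetes/admin.conf", "/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf", "/etc/kubernetes/scheduler.conf")
    	if err := cmd.Run(); err != nil {
    		// Mirrors "Process exited with status 2" above: nothing stale to clean up.
    		fmt.Println("no stale kubeadm config:", err)
    		return
    	}
    	fmt.Println("stale config present; a cleanup pass would run first")
    }
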
	I0412 20:05:38.700815  255510 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0412 20:05:39.004549  255510 out.go:203]   - Generating certificates and keys ...
	I0412 20:05:41.460901  255510 out.go:203]   - Booting up control plane ...
	I0412 20:05:54.020282  255510 out.go:203]   - Configuring RBAC rules ...
	I0412 20:05:54.434408  255510 cni.go:93] Creating CNI manager for ""
	I0412 20:05:54.434432  255510 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0412 20:05:54.436508  255510 out.go:176] * Configuring CNI (Container Networking Interface) ...
	I0412 20:05:54.436573  255510 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0412 20:05:54.440339  255510 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.5/kubectl ...
	I0412 20:05:54.440365  255510 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0412 20:05:54.454937  255510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0412 20:05:55.235591  255510 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0412 20:05:55.235660  255510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:05:55.235673  255510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl label nodes minikube.k8s.io/version=v1.25.2 minikube.k8s.io/commit=dcd548d63d1c0dcbdc0ffc0bd37d4379117c142f minikube.k8s.io/name=embed-certs-20220412200510-42006 minikube.k8s.io/updated_at=2022_04_12T20_05_55_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:05:55.243207  255510 ops.go:34] apiserver oom_adj: -16
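
The oom_adj report above confirms the apiserver is shielded from the kernel OOM killer: -16 on the legacy -17..15 scale means almost anything else gets killed first. A sketch of the same probe (illustrative only; reads the legacy /proc/<pid>/oom_adj file that the cat $(pgrep ...) pipeline above uses):

    // Illustrative sketch, not minikube source.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	out, err := exec.Command("pgrep", "kube-apiserver").Output()
    	pids := strings.Fields(string(out))
    	if err != nil || len(pids) == 0 {
    		fmt.Fprintln(os.Stderr, "kube-apiserver not running")
    		return
    	}
    	adj, err := os.ReadFile("/proc/" + pids[0] + "/oom_adj")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		return
    	}
    	fmt.Printf("apiserver oom_adj: %s\n", strings.TrimSpace(string(adj))) // -16 in this run
    }
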
	I0412 20:05:55.311726  255510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:05:55.868912  255510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:05:56.369112  255510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:05:56.869254  255510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:05:57.369276  255510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:05:57.869064  255510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:05:58.369311  255510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:05:58.869252  255510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:05:59.369238  255510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:05:59.869253  255510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:06:00.368809  255510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:06:00.868981  255510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:06:01.369289  255510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:06:01.869156  255510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:06:02.369270  255510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:06:02.869329  255510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:06:03.368709  255510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:06:03.869055  255510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:06:04.369376  255510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:06:04.869307  255510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:06:05.368930  255510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:06:05.869319  255510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:06:06.369282  255510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:06:06.869309  255510 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:06:06.931039  255510 kubeadm.go:1020] duration metric: took 11.69544454s to wait for elevateKubeSystemPrivileges.
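
The burst of identical "kubectl get sa default" runs above is a fixed-interval retry: the command is reissued every 500ms until the default service account exists, the signal that bootstrap finished and the minikube-rbac binding can take effect (11.7s in this run). A sketch of that pattern using the apimachinery wait helper (illustrative, not minikube's source; the kubectl and kubeconfig paths are taken from the log):

    // Illustrative sketch, not minikube source.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"

    	"k8s.io/apimachinery/pkg/util/wait"
    )

    func main() {
    	kubectl := "/var/lib/minikube/binaries/v1.23.5/kubectl"
    	err := wait.PollImmediate(500*time.Millisecond, 2*time.Minute, func() (bool, error) {
    		cmd := exec.Command(kubectl, "get", "sa", "default",
    			"--kubeconfig=/var/lib/minikube/kubeconfig")
    		// A non-zero exit just means "not ready yet"; keep polling.
    		return cmd.Run() == nil, nil
    	})
    	if err != nil {
    		fmt.Println("default service account never appeared:", err)
    		return
    	}
    	fmt.Println("default service account is ready")
    }
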
	I0412 20:06:06.931078  255510 kubeadm.go:393] StartCluster complete in 28.28255204s
	I0412 20:06:06.931102  255510 settings.go:142] acquiring lock: {Name:mkaf0259d09993f7f0249c32b54fea561e21f88c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 20:06:06.931210  255510 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0412 20:06:06.932955  255510 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig: {Name:mk47182dadcc139652898f38b199a7292a3b4031 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 20:06:07.449539  255510 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "embed-certs-20220412200510-42006" rescaled to 1
	I0412 20:06:07.449614  255510 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0412 20:06:07.452260  255510 out.go:176] * Verifying Kubernetes components...
	I0412 20:06:07.449687  255510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0412 20:06:07.452334  255510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0412 20:06:07.449692  255510 addons.go:415] enableAddons start: toEnable=map[], additional=[]
	I0412 20:06:07.449927  255510 config.go:178] Loaded profile config "embed-certs-20220412200510-42006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.5
	I0412 20:06:07.452469  255510 addons.go:65] Setting storage-provisioner=true in profile "embed-certs-20220412200510-42006"
	I0412 20:06:07.452498  255510 addons.go:153] Setting addon storage-provisioner=true in "embed-certs-20220412200510-42006"
	W0412 20:06:07.452510  255510 addons.go:165] addon storage-provisioner should already be in state true
	I0412 20:06:07.452517  255510 addons.go:65] Setting default-storageclass=true in profile "embed-certs-20220412200510-42006"
	I0412 20:06:07.452559  255510 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-20220412200510-42006"
	I0412 20:06:07.452570  255510 host.go:66] Checking if "embed-certs-20220412200510-42006" exists ...
	I0412 20:06:07.452962  255510 cli_runner.go:164] Run: docker container inspect embed-certs-20220412200510-42006 --format={{.State.Status}}
	I0412 20:06:07.453152  255510 cli_runner.go:164] Run: docker container inspect embed-certs-20220412200510-42006 --format={{.State.Status}}
	I0412 20:06:07.514238  255510 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0412 20:06:07.514446  255510 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0412 20:06:07.514473  255510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0412 20:06:07.514628  255510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220412200510-42006
	I0412 20:06:07.518447  255510 addons.go:153] Setting addon default-storageclass=true in "embed-certs-20220412200510-42006"
	W0412 20:06:07.518476  255510 addons.go:165] addon default-storageclass should already be in state true
	I0412 20:06:07.518502  255510 host.go:66] Checking if "embed-certs-20220412200510-42006" exists ...
	I0412 20:06:07.518829  255510 cli_runner.go:164] Run: docker container inspect embed-certs-20220412200510-42006 --format={{.State.Status}}
	I0412 20:06:07.567523  255510 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0412 20:06:07.567552  255510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0412 20:06:07.567617  255510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220412200510-42006
	I0412 20:06:07.570330  255510 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49402 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/embed-certs-20220412200510-42006/id_rsa Username:docker}
	I0412 20:06:07.608852  255510 node_ready.go:35] waiting up to 6m0s for node "embed-certs-20220412200510-42006" to be "Ready" ...
	I0412 20:06:07.609304  255510 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0412 20:06:07.619423  255510 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49402 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/embed-certs-20220412200510-42006/id_rsa Username:docker}
	I0412 20:06:07.700538  255510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0412 20:06:07.802835  255510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0412 20:06:08.096113  255510 start.go:777] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS
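
The long sed pipeline a few lines up rewrites the CoreDNS ConfigMap so that host.minikube.internal resolves to the network gateway (192.168.58.1 here): a hosts{} stanza with fallthrough is spliced in just above the forward plugin. A string-level sketch of that edit (illustrative only; the Corefile below is an assumed, abridged default):

    // Illustrative sketch, not minikube source.
    package main

    import (
    	"fmt"
    	"strings"
    )

    func main() {
    	// Assumed, abridged default Corefile.
    	corefile := ".:53 {\n    errors\n    forward . /etc/resolv.conf\n    cache 30\n}"
    	stanza := "    hosts {\n       192.168.58.1 host.minikube.internal\n       fallthrough\n    }\n"
    	var b strings.Builder
    	for _, line := range strings.Split(corefile, "\n") {
    		if strings.Contains(line, "forward . /etc/resolv.conf") {
    			b.WriteString(stanza) // inject just above the forward block, as the sed does
    		}
    		b.WriteString(line + "\n")
    	}
    	fmt.Print(b.String())
    }
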
	I0412 20:06:08.230624  255510 out.go:176] * Enabled addons: storage-provisioner, default-storageclass
	I0412 20:06:08.230661  255510 addons.go:417] enableAddons completed in 780.984148ms
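
Both addon applies above reach the node over SSH; the port has to be discovered at runtime because Docker maps the container's 22/tcp to an ephemeral host port (49402 in this run, visible in the sshutil lines and in the inspect output further down). A sketch of that lookup using the same Go template as the cli_runner lines above (illustrative only):

    // Illustrative sketch, not minikube source.
    package main

    import (
    	"fmt"
    	"net"
    	"os/exec"
    	"strings"
    )

    func main() {
    	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
    	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl,
    		"embed-certs-20220412200510-42006").Output()
    	if err != nil {
    		panic(err)
    	}
    	port := strings.TrimSpace(string(out))
    	// Quick reachability check against the forwarded sshd.
    	conn, err := net.Dial("tcp", net.JoinHostPort("127.0.0.1", port))
    	if err != nil {
    		panic(err)
    	}
    	conn.Close()
    	fmt.Println("sshd reachable on host port", port)
    }
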
	I0412 20:06:09.623139  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:06:12.116879  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:06:14.615820  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:06:16.616975  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:06:19.117800  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:06:21.616048  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:06:23.616340  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:06:26.115959  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:06:28.116371  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:06:30.616559  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:06:33.116290  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:06:35.616400  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:06:37.616983  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:06:40.116038  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:06:42.116983  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:06:44.616790  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:06:47.116435  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:06:49.117177  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:06:51.118411  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:06:53.616135  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:06:55.616256  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:06:57.616496  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:06:59.616535  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:07:01.616795  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:07:04.116517  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:07:06.116969  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:07:08.117491  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:07:10.617366  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:07:13.116482  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:07:15.616860  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:07:17.616909  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:07:20.116362  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:07:22.116777  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:07:24.117192  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:07:26.616625  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:07:29.116436  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:07:31.615877  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:07:33.616194  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:07:35.617001  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:07:38.116259  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:07:40.116468  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:07:42.116642  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:07:44.116961  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:07:46.117375  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:07:48.616059  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:07:50.616244  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:07:52.616502  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:07:55.116309  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:07:57.616305  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:07:59.616970  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:08:02.115904  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:08:04.116462  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:08:06.616342  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:08:09.116011  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:08:11.116286  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:08:13.116651  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:08:15.616009  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:08:17.616863  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:08:20.116620  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:08:22.116816  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:08:24.616268  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:08:27.116520  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:08:29.615844  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:08:31.617165  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:08:34.116149  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:08:36.116632  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:08:38.616288  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:08:40.617049  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:08:42.617248  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:08:45.116711  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:08:47.616263  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:08:49.616386  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:08:52.117238  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:08:54.616379  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:08:57.116572  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:08:59.616687  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:09:02.116215  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:09:04.616429  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:09:06.616538  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:09:08.616760  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:09:10.616938  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:09:13.116240  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:09:15.616955  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:09:17.617162  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:09:20.116251  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:09:22.116450  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:09:24.616869  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:09:27.116619  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:09:29.616848  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:09:32.116785  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:09:34.617051  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:09:37.116661  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:09:39.116703  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:09:41.116866  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:09:43.616290  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:09:45.617195  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:09:48.116600  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:09:50.617008  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:09:53.116012  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:09:55.116667  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:09:57.616022  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:09:59.617067  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:10:02.116700  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:10:04.616203  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:10:07.116257  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:10:07.618422  255510 node_ready.go:38] duration metric: took 4m0.009531174s waiting for node "embed-certs-20220412200510-42006" to be "Ready" ...
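
Each node_ready line above is one poll (roughly every 2.5s) of the node's Ready condition, which never leaves "False" in this run, so the wait expires and the start fails with GUEST_START below. A client-go sketch of the underlying check (illustrative, not minikube's source; assumes the kubeconfig path from the log is readable wherever this runs):

    // Illustrative sketch, not minikube source.
    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	node, err := cs.CoreV1().Nodes().Get(context.TODO(),
    		"embed-certs-20220412200510-42006", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, c := range node.Status.Conditions {
    		if c.Type == corev1.NodeReady {
    			// The log above prints the equivalent of this on every iteration.
    			fmt.Printf("node %s Ready=%s reason=%s\n", node.Name, c.Status, c.Reason)
    		}
    	}
    }
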
	I0412 20:10:07.620809  255510 out.go:176] 
	W0412 20:10:07.620921  255510 out.go:241] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0412 20:10:07.620935  255510 out.go:241] * 
	* 
	W0412 20:10:07.621615  255510 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0412 20:10:07.623885  255510 out.go:176] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:172: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p embed-certs-20220412200510-42006 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.5": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220412200510-42006
helpers_test.go:235: (dbg) docker inspect embed-certs-20220412200510-42006:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "340eb3625ebd62fb359cd33fcc6dcfaf998d12a5a7abf9d2b97ffe2759fd47b7",
	        "Created": "2022-04-12T20:05:23.305199436Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 257029,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-04-12T20:05:24.124628513Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:44d43b69f3d5ba7f801dca891b535f23f9839671e82277938ec7dc42a22c50d6",
	        "ResolvConfPath": "/var/lib/docker/containers/340eb3625ebd62fb359cd33fcc6dcfaf998d12a5a7abf9d2b97ffe2759fd47b7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/340eb3625ebd62fb359cd33fcc6dcfaf998d12a5a7abf9d2b97ffe2759fd47b7/hostname",
	        "HostsPath": "/var/lib/docker/containers/340eb3625ebd62fb359cd33fcc6dcfaf998d12a5a7abf9d2b97ffe2759fd47b7/hosts",
	        "LogPath": "/var/lib/docker/containers/340eb3625ebd62fb359cd33fcc6dcfaf998d12a5a7abf9d2b97ffe2759fd47b7/340eb3625ebd62fb359cd33fcc6dcfaf998d12a5a7abf9d2b97ffe2759fd47b7-json.log",
	        "Name": "/embed-certs-20220412200510-42006",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-20220412200510-42006:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-20220412200510-42006",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/dadeb2eddd4e44191a9cbc0ea441c3b044c125e01ecdef76eaf6f1e678a0465d-init/diff:/var/lib/docker/overlay2/a46d95d024de4bf9705eb193a92586bdab1878cd991975232b71b00099a9dcbd/diff:/var/lib/docker/overlay2/ea82ee4a684697cc3575193cd81b57372b927c9bf8e744fce634f9abd0ce56f9/diff:/var/lib/docker/overlay2/78746ad8dd0d6497f442bd186c99cfd280a7ed0ff07c9d33d217c0f00c8c4565/diff:/var/lib/docker/overlay2/a402f380eceb56655ea5f1e6ca4a61a01ae014a5df04f1a7d02d8f57ff3e6c84/diff:/var/lib/docker/overlay2/b27a231791a4d14a662f9e6e34fdd213411e56cc17149199657aa480018b3c72/diff:/var/lib/docker/overlay2/0a44e7fc2c8d5589d496b9d0585d39e8e142f48342ff9669a35c370bd0298e42/diff:/var/lib/docker/overlay2/6ca98e52ca7d4cc60d14bd2db9969dd3356e0e0ce3acd5bfb5734e6e59f52c7e/diff:/var/lib/docker/overlay2/9957a7c00c30c9d801326093ddf20994a7ee1daaa54bc4dac5c2dd6d8711bd7e/diff:/var/lib/docker/overlay2/f7a1aafecf6ee716c484b5eecbbf236a53607c253fe283c289707fad85495a88/diff:/var/lib/docker/overlay2/fe8cd1
26522650fedfc827751e0b74da9a882ff48de51bc9dee6428ee3bc1122/diff:/var/lib/docker/overlay2/5b4cc7e4a78288063ad39231ca158608aa28e9dec6015d4e186e4c4d6888017f/diff:/var/lib/docker/overlay2/2a754ceb6abee0f92c99667fae50c7899233e94595630e9caffbf73cda1ff741/diff:/var/lib/docker/overlay2/9e69139d9b2bc63ab678378e004018ece394ec37e8289ba5eb30901dda160da5/diff:/var/lib/docker/overlay2/3db8e6413b3a1f309b81d2e1a79c3d239c4e4568b31a6f4bf92511f477f3a61d/diff:/var/lib/docker/overlay2/5ab54e45d09e2d6da4f4228ebae3075b5974e1d847526c1011fc7368392ef0d2/diff:/var/lib/docker/overlay2/6daf6a3cf916347bbbb70ace4aab29dd0f272dc9e39d6b0bf14940470857f1d5/diff:/var/lib/docker/overlay2/b85d29df9ed74e769c82a956eb46ca4eaf51018e94270fee2f58a6f2d82c354c/diff:/var/lib/docker/overlay2/0804b9c30e0dcc68e15139106e47bca1969b010d520652c87ff1476f5da9b799/diff:/var/lib/docker/overlay2/2ef50ba91c77826aae2efca8daf7194c2d56fd8e745476a35413585cdab580a6/diff:/var/lib/docker/overlay2/6f5a272367c30d47254dedc8a42e6b2791c406c3b74fd6a8242d568e4ec362e3/diff:/var/lib/d
ocker/overlay2/e978bd5ca7463862ca1b51d0bf19f95d916464dc866f09f1ab4a5ae4c082c3a9/diff:/var/lib/docker/overlay2/0d60a5805e276ca3bff4824250eab1d2960e9d10d28282e07652204c07dc107f/diff:/var/lib/docker/overlay2/d00efa0bc999057fcf3efdeed81022cc8b9b9871919f11d7d9199a3d22fda41b/diff:/var/lib/docker/overlay2/44d3db5bf7925c4cc8ee60008ff23d799e12ea6586850d797b930fa796788861/diff:/var/lib/docker/overlay2/4af15c525b7ce96b7fd4117c156f53cf9099702641c2907909c12b7019563d44/diff:/var/lib/docker/overlay2/ae9ca4b8da4afb1303158a42ec2ac83dc057c0eaefcd69b7eeaa094ae24a39e7/diff:/var/lib/docker/overlay2/afb8ebd776ddcba17d1056f2350cd0b303c6664964644896a92e9c07252b5d95/diff:/var/lib/docker/overlay2/41b6235378ad54ccaec907f16811e7cd66bd777db63151293f4d8247a33af8f1/diff:/var/lib/docker/overlay2/e079465076581cb577a9d5c7d676cecb6495ddd73d9fc330e734203dd7e48607/diff:/var/lib/docker/overlay2/2d3a7c3e62a99d54d94c2562e13b904453442bda8208afe73cdbe1afdbdd0684/diff:/var/lib/docker/overlay2/b9e03b9cbc1c5a9bbdbb0c99ca5d7539c2fa81a37872c40e07377b52f19
50f4b/diff:/var/lib/docker/overlay2/fd0b72378869edec809e7ead1e4448ae67c73245e0e98d751c51253c80f12d56/diff:/var/lib/docker/overlay2/a34f5625ad35eb2eb1058204a5c23590d70d9aae62a3a0cf05f87501c388ccde/diff:/var/lib/docker/overlay2/6221ad5f4d7b133c35d96ab112cf2eb437196475a72ea0ec8952c058c6644381/diff:/var/lib/docker/overlay2/b33a322162ab62a47e5e731b35da4a989d8a79fcb67e1925b109eace6772370c/diff:/var/lib/docker/overlay2/b52fc81aca49f276f1c709fa139521063628f4042b9da5969a3487a57ee3226b/diff:/var/lib/docker/overlay2/5b4d11a181cad1ea657c7ea99d422b51c942ece21b8d24442b4e8806644e0e1c/diff:/var/lib/docker/overlay2/1620ce1d42f02f38d07f3ff0970e3df6940a3be20f3c7cd835f4f40f5cc2d010/diff:/var/lib/docker/overlay2/43f18c528700dc241024bb24f43a0d5192ecc9575f4b053582410f6265326434/diff:/var/lib/docker/overlay2/e59874999e485483e50da428a499e40c91890c33515857454d7a64bc04ca0c43/diff:/var/lib/docker/overlay2/a120ff1bbaa325cd87d2682d6751d3bf287b66d4bbe31bd1f9f6283d724491ac/diff:/var/lib/docker/overlay2/a6a6f3646fabc023283ff6349b9627be8332c4
bb740688f8fda12c98bd76b725/diff:/var/lib/docker/overlay2/3c2b110c4b3a8689b2792b2b73f99f06bd9858b494c2164e812208579b0223f2/diff:/var/lib/docker/overlay2/98e3881e2e4128283f8d66fafc082bc795e22eab77f135635d3249367b92ba5c/diff:/var/lib/docker/overlay2/ce937670cf64eff618c699bfd15e46c6d70c0184fef594182e5ec6df83b265bc/diff",
	                "MergedDir": "/var/lib/docker/overlay2/dadeb2eddd4e44191a9cbc0ea441c3b044c125e01ecdef76eaf6f1e678a0465d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/dadeb2eddd4e44191a9cbc0ea441c3b044c125e01ecdef76eaf6f1e678a0465d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/dadeb2eddd4e44191a9cbc0ea441c3b044c125e01ecdef76eaf6f1e678a0465d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-20220412200510-42006",
	                "Source": "/var/lib/docker/volumes/embed-certs-20220412200510-42006/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-20220412200510-42006",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-20220412200510-42006",
	                "name.minikube.sigs.k8s.io": "embed-certs-20220412200510-42006",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cfc6cecb94535d9fe135b877fee8b93f35d43a7969a073acac3b2c920f4dbb93",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49402"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49401"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49398"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49400"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49399"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/cfc6cecb9453",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-20220412200510-42006": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "340eb3625ebd",
	                        "embed-certs-20220412200510-42006"
	                    ],
	                    "NetworkID": "4ace6a0fae231d855dc7c20348778126fda239556e97939a30b4df667ae930f8",
	                    "EndpointID": "c940297a63e2c35df1a11c0d38d5e5fab82464350b8665dcb6e65be5ac8cc428",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20220412200510-42006 -n embed-certs-20220412200510-42006
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/FirstStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/FirstStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-20220412200510-42006 logs -n 25
helpers_test.go:252: TestStartStop/group/embed-certs/serial/FirstStart logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|-----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                       Args                        |                 Profile                 |  User   | Version |          Start Time           |           End Time            |
	|---------|---------------------------------------------------|-----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| ssh     | -p                                                | custom-weave-20220412195203-42006       | jenkins | v1.25.2 | Tue, 12 Apr 2022 19:55:57 UTC | Tue, 12 Apr 2022 19:55:57 UTC |
	|         | custom-weave-20220412195203-42006                 |                                         |         |         |                               |                               |
	|         | pgrep -a kubelet                                  |                                         |         |         |                               |                               |
	| start   | -p                                                | cert-expiration-20220412195203-42006    | jenkins | v1.25.2 | Tue, 12 Apr 2022 19:52:03 UTC | Tue, 12 Apr 2022 19:56:06 UTC |
	|         | cert-expiration-20220412195203-42006              |                                         |         |         |                               |                               |
	|         | --memory=2048 --cert-expiration=3m                |                                         |         |         |                               |                               |
	|         | --driver=docker                                   |                                         |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                         |         |         |                               |                               |
	| delete  | -p                                                | custom-weave-20220412195203-42006       | jenkins | v1.25.2 | Tue, 12 Apr 2022 19:56:06 UTC | Tue, 12 Apr 2022 19:56:09 UTC |
	|         | custom-weave-20220412195203-42006                 |                                         |         |         |                               |                               |
	| start   | -p cilium-20220412195203-42006                    | cilium-20220412195203-42006             | jenkins | v1.25.2 | Tue, 12 Apr 2022 19:55:47 UTC | Tue, 12 Apr 2022 19:57:10 UTC |
	|         | --memory=2048                                     |                                         |         |         |                               |                               |
	|         | --alsologtostderr                                 |                                         |         |         |                               |                               |
	|         | --wait=true --wait-timeout=5m                     |                                         |         |         |                               |                               |
	|         | --cni=cilium --driver=docker                      |                                         |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                         |         |         |                               |                               |
	| ssh     | -p cilium-20220412195203-42006                    | cilium-20220412195203-42006             | jenkins | v1.25.2 | Tue, 12 Apr 2022 19:57:15 UTC | Tue, 12 Apr 2022 19:57:15 UTC |
	|         | pgrep -a kubelet                                  |                                         |         |         |                               |                               |
	| delete  | -p cilium-20220412195203-42006                    | cilium-20220412195203-42006             | jenkins | v1.25.2 | Tue, 12 Apr 2022 19:57:26 UTC | Tue, 12 Apr 2022 19:57:29 UTC |
	| start   | -p                                                | enable-default-cni-20220412195202-42006 | jenkins | v1.25.2 | Tue, 12 Apr 2022 19:57:29 UTC | Tue, 12 Apr 2022 19:58:30 UTC |
	|         | enable-default-cni-20220412195202-42006           |                                         |         |         |                               |                               |
	|         | --memory=2048 --alsologtostderr                   |                                         |         |         |                               |                               |
	|         | --wait=true --wait-timeout=5m                     |                                         |         |         |                               |                               |
	|         | --enable-default-cni=true                         |                                         |         |         |                               |                               |
	|         | --driver=docker                                   |                                         |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                         |         |         |                               |                               |
	| ssh     | -p                                                | enable-default-cni-20220412195202-42006 | jenkins | v1.25.2 | Tue, 12 Apr 2022 19:58:31 UTC | Tue, 12 Apr 2022 19:58:31 UTC |
	|         | enable-default-cni-20220412195202-42006           |                                         |         |         |                               |                               |
	|         | pgrep -a kubelet                                  |                                         |         |         |                               |                               |
	| start   | -p                                                | cert-expiration-20220412195203-42006    | jenkins | v1.25.2 | Tue, 12 Apr 2022 19:59:06 UTC | Tue, 12 Apr 2022 19:59:21 UTC |
	|         | cert-expiration-20220412195203-42006              |                                         |         |         |                               |                               |
	|         | --memory=2048                                     |                                         |         |         |                               |                               |
	|         | --cert-expiration=8760h                           |                                         |         |         |                               |                               |
	|         | --driver=docker                                   |                                         |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                         |         |         |                               |                               |
	| delete  | -p                                                | cert-expiration-20220412195203-42006    | jenkins | v1.25.2 | Tue, 12 Apr 2022 19:59:21 UTC | Tue, 12 Apr 2022 19:59:24 UTC |
	|         | cert-expiration-20220412195203-42006              |                                         |         |         |                               |                               |
	| -p      | pause-20220412195428-42006                        | pause-20220412195428-42006              | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:02:37 UTC | Tue, 12 Apr 2022 20:02:38 UTC |
	|         | logs -n 25                                        |                                         |         |         |                               |                               |
	| delete  | -p pause-20220412195428-42006                     | pause-20220412195428-42006              | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:02:39 UTC | Tue, 12 Apr 2022 20:02:42 UTC |
	| -p      | kindnet-20220412195202-42006                      | kindnet-20220412195202-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:04:17 UTC | Tue, 12 Apr 2022 20:04:18 UTC |
	|         | logs -n 25                                        |                                         |         |         |                               |                               |
	| delete  | -p                                                | kindnet-20220412195202-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:04:19 UTC | Tue, 12 Apr 2022 20:04:21 UTC |
	|         | kindnet-20220412195202-42006                      |                                         |         |         |                               |                               |
	| -p      | enable-default-cni-20220412195202-42006           | enable-default-cni-20220412195202-42006 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:04:48 UTC | Tue, 12 Apr 2022 20:04:49 UTC |
	|         | logs -n 25                                        |                                         |         |         |                               |                               |
	| delete  | -p                                                | enable-default-cni-20220412195202-42006 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:04:50 UTC | Tue, 12 Apr 2022 20:04:53 UTC |
	|         | enable-default-cni-20220412195202-42006           |                                         |         |         |                               |                               |
	| -p      | calico-20220412195203-42006                       | calico-20220412195203-42006             | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:05:03 UTC | Tue, 12 Apr 2022 20:05:05 UTC |
	|         | logs -n 25                                        |                                         |         |         |                               |                               |
	| delete  | -p calico-20220412195203-42006                    | calico-20220412195203-42006             | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:05:05 UTC | Tue, 12 Apr 2022 20:05:10 UTC |
	| start   | -p                                                | no-preload-20220412200453-42006         | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:04:53 UTC | Tue, 12 Apr 2022 20:06:07 UTC |
	|         | no-preload-20220412200453-42006                   |                                         |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                         |         |         |                               |                               |
	|         | --wait=true --preload=false                       |                                         |         |         |                               |                               |
	|         | --driver=docker                                   |                                         |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                         |         |         |                               |                               |
	|         | --kubernetes-version=v1.23.6-rc.0                 |                                         |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | no-preload-20220412200453-42006         | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:06:16 UTC | Tue, 12 Apr 2022 20:06:17 UTC |
	|         | no-preload-20220412200453-42006                   |                                         |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                         |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                         |         |         |                               |                               |
	| stop    | -p                                                | no-preload-20220412200453-42006         | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:06:17 UTC | Tue, 12 Apr 2022 20:06:37 UTC |
	|         | no-preload-20220412200453-42006                   |                                         |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                         |         |         |                               |                               |
	| addons  | enable dashboard -p                               | no-preload-20220412200453-42006         | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:06:37 UTC | Tue, 12 Apr 2022 20:06:38 UTC |
	|         | no-preload-20220412200453-42006                   |                                         |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                         |         |         |                               |                               |
	| start   | -p bridge-20220412195202-42006                    | bridge-20220412195202-42006             | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:02:42 UTC | Tue, 12 Apr 2022 20:07:57 UTC |
	|         | --memory=2048                                     |                                         |         |         |                               |                               |
	|         | --alsologtostderr                                 |                                         |         |         |                               |                               |
	|         | --wait=true --wait-timeout=5m                     |                                         |         |         |                               |                               |
	|         | --cni=bridge --driver=docker                      |                                         |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                         |         |         |                               |                               |
	| ssh     | -p bridge-20220412195202-42006                    | bridge-20220412195202-42006             | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:07:57 UTC | Tue, 12 Apr 2022 20:07:58 UTC |
	|         | pgrep -a kubelet                                  |                                         |         |         |                               |                               |
	| -p      | old-k8s-version-20220412200421-42006              | old-k8s-version-20220412200421-42006    | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:09:15 UTC | Tue, 12 Apr 2022 20:09:16 UTC |
	|         | logs -n 25                                        |                                         |         |         |                               |                               |
	|---------|---------------------------------------------------|-----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/04/12 20:06:38
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.18 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0412 20:06:38.070775  262043 out.go:297] Setting OutFile to fd 1 ...
	I0412 20:06:38.070924  262043 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0412 20:06:38.070934  262043 out.go:310] Setting ErrFile to fd 2...
	I0412 20:06:38.070939  262043 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0412 20:06:38.071052  262043 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/bin
	I0412 20:06:38.071305  262043 out.go:304] Setting JSON to false
	I0412 20:06:38.072898  262043 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":10151,"bootTime":1649783847,"procs":578,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1023-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0412 20:06:38.072978  262043 start.go:125] virtualization: kvm guest
	I0412 20:06:38.076134  262043 out.go:176] * [no-preload-20220412200453-42006] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	I0412 20:06:38.078061  262043 out.go:176]   - MINIKUBE_LOCATION=13812
	I0412 20:06:38.076319  262043 notify.go:193] Checking for updates...
	I0412 20:06:38.079814  262043 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0412 20:06:38.081760  262043 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0412 20:06:38.083632  262043 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube
	I0412 20:06:38.085370  262043 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0412 20:06:38.085992  262043 config.go:178] Loaded profile config "no-preload-20220412200453-42006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6-rc.0
	I0412 20:06:38.086634  262043 driver.go:346] Setting default libvirt URI to qemu:///system
	I0412 20:06:38.132805  262043 docker.go:137] docker version: linux-20.10.14
	I0412 20:06:38.132930  262043 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0412 20:06:38.235912  262043 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:75 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:49 SystemTime:2022-04-12 20:06:38.16523747 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1023-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662799872 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
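The cgroup warnings that follow are derived from this dump: minikube shells out to docker system info --format "{{json .}}" and decodes the JSON. A minimal sketch of that probe, reading only a few of the fields visible above (the partial dockerInfo struct here is illustrative, not minikube's type):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// dockerInfo is a deliberately partial view of `docker info`;
	// json.Unmarshal ignores every field we do not declare.
	type dockerInfo struct {
		MemoryLimit  bool   `json:"MemoryLimit"`
		CgroupDriver string `json:"CgroupDriver"`
		NCPU         int    `json:"NCPU"`
		MemTotal     int64  `json:"MemTotal"`
	}

	func main() {
		out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
		if err != nil {
			panic(err)
		}
		var info dockerInfo
		if err := json.Unmarshal(out, &info); err != nil {
			panic(err)
		}
		fmt.Printf("cgroup driver %s, memory limits %v, %d CPUs\n", info.CgroupDriver, info.MemoryLimit, info.NCPU)
	}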
	I0412 20:06:38.236034  262043 docker.go:254] overlay module found
	I0412 20:06:38.238794  262043 out.go:176] * Using the docker driver based on existing profile
	I0412 20:06:38.238830  262043 start.go:284] selected driver: docker
	I0412 20:06:38.238836  262043 start.go:801] validating driver "docker" against &{Name:no-preload-20220412200453-42006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6-rc.0 ClusterName:no-preload-20220412200453-42006 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.6-rc.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0412 20:06:38.238961  262043 start.go:812] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	W0412 20:06:38.239009  262043 oci.go:120] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0412 20:06:38.239032  262043 out.go:241] ! Your cgroup does not allow setting memory.
	I0412 20:06:38.240836  262043 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0412 20:06:38.241472  262043 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0412 20:06:38.341391  262043 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:75 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:49 SystemTime:2022-04-12 20:06:38.273881484 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1023-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662799872 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	W0412 20:06:38.341566  262043 oci.go:120] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0412 20:06:38.341672  262043 out.go:241] ! Your cgroup does not allow setting memory.
	I0412 20:06:38.344860  262043 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
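The oci.go:120 warning above is a host-side capability probe: when the kernel's memory cgroup controller is missing or disabled, the requested --memory limit cannot be enforced, which is why every profile in this run logs "Your cgroup does not allow setting memory." One way to perform such a probe on a cgroup v1 host is to parse /proc/cgroups; this is a sketch of the idea, not necessarily minikube's exact check:

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	// memoryCgroupEnabled reports whether /proc/cgroups lists an enabled
	// "memory" controller. Each line reads:
	//   subsys_name  hierarchy  num_cgroups  enabled
	func memoryCgroupEnabled() (bool, error) {
		f, err := os.Open("/proc/cgroups")
		if err != nil {
			return false, err
		}
		defer f.Close()
		s := bufio.NewScanner(f)
		for s.Scan() {
			fields := strings.Fields(s.Text())
			if len(fields) == 4 && fields[0] == "memory" {
				return fields[3] == "1", nil
			}
		}
		return false, s.Err()
	}

	func main() {
		enabled, err := memoryCgroupEnabled()
		fmt.Println(enabled, err)
	}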
	I0412 20:06:38.345002  262043 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0412 20:06:38.345035  262043 cni.go:93] Creating CNI manager for ""
	I0412 20:06:38.345045  262043 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0412 20:06:38.345072  262043 start_flags.go:306] config:
	{Name:no-preload-20220412200453-42006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6-rc.0 ClusterName:no-preload-20220412200453-42006 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.6-rc.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0412 20:06:38.347242  262043 out.go:176] * Starting control plane node no-preload-20220412200453-42006 in cluster no-preload-20220412200453-42006
	I0412 20:06:38.347275  262043 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0412 20:06:38.348898  262043 out.go:176] * Pulling base image ...
	I0412 20:06:38.348934  262043 preload.go:132] Checking if preload exists for k8s version v1.23.6-rc.0 and runtime containerd
	I0412 20:06:38.348973  262043 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon
	I0412 20:06:38.349104  262043 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220412200453-42006/config.json ...
	I0412 20:06:38.349253  262043 cache.go:107] acquiring lock: {Name:mk62ec854ac97fe36974639873696d539b0701d0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0412 20:06:38.349259  262043 cache.go:107] acquiring lock: {Name:mk2bda950897038ca1478b3a7163d8ac0f3417b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0412 20:06:38.349371  262043 cache.go:107] acquiring lock: {Name:mkf0415b3ed7938a96d14f1e7cce50737ac15575 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0412 20:06:38.349386  262043 cache.go:107] acquiring lock: {Name:mk6dc1ee3b9a5f568e0933515ea79a17a4e49320 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0412 20:06:38.349405  262043 cache.go:107] acquiring lock: {Name:mk5210dd2f9d4dcb1bae57090039fdcf65f204ee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0412 20:06:38.349414  262043 cache.go:107] acquiring lock: {Name:mkb4e117321415b81dd2df649b67db215b4b34e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0412 20:06:38.349462  262043 cache.go:107] acquiring lock: {Name:mke367e34b80546a2c751cf2682a4715709b415f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0412 20:06:38.349519  262043 cache.go:107] acquiring lock: {Name:mk4b40f363fb59846cd134c4150ff1979bf7055a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0412 20:06:38.349588  262043 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0412 20:06:38.349601  262043 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.6 exists
	I0412 20:06:38.349612  262043 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 371.282µs
	I0412 20:06:38.349618  262043 cache.go:96] cache image "k8s.gcr.io/pause:3.6" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.6" took 311.517µs
	I0412 20:06:38.349630  262043 cache.go:80] save to tar file k8s.gcr.io/pause:3.6 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/pause_3.6 succeeded
	I0412 20:06:38.349626  262043 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0412 20:06:38.349642  262043 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.23.6-rc.0 exists
	I0412 20:06:38.349651  262043 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.23.6-rc.0 exists
	I0412 20:06:38.349656  262043 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.23.6-rc.0 exists
	I0412 20:06:38.349672  262043 cache.go:96] cache image "k8s.gcr.io/kube-controller-manager:v1.23.6-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.23.6-rc.0" took 290.932µs
	I0412 20:06:38.349687  262043 cache.go:96] cache image "k8s.gcr.io/kube-scheduler:v1.23.6-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.23.6-rc.0" took 321.114µs
	I0412 20:06:38.349688  262043 cache.go:96] cache image "k8s.gcr.io/kube-apiserver:v1.23.6-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.23.6-rc.0" took 277.58µs
	I0412 20:06:38.349695  262043 cache.go:80] save to tar file k8s.gcr.io/kube-controller-manager:v1.23.6-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/kube-controller-manager_v1.23.6-rc.0 succeeded
	I0412 20:06:38.349702  262043 cache.go:80] save to tar file k8s.gcr.io/kube-scheduler:v1.23.6-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/kube-scheduler_v1.23.6-rc.0 succeeded
	I0412 20:06:38.349704  262043 cache.go:80] save to tar file k8s.gcr.io/kube-apiserver:v1.23.6-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/kube-apiserver_v1.23.6-rc.0 succeeded
	I0412 20:06:38.349731  262043 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6 exists
	I0412 20:06:38.349747  262043 cache.go:96] cache image "k8s.gcr.io/coredns/coredns:v1.8.6" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6" took 400.509µs
	I0412 20:06:38.349752  262043 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.23.6-rc.0 exists
	I0412 20:06:38.349776  262043 cache.go:115] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.1-0 exists
	I0412 20:06:38.349782  262043 cache.go:96] cache image "k8s.gcr.io/kube-proxy:v1.23.6-rc.0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.23.6-rc.0" took 545.75µs
	I0412 20:06:38.349786  262043 cache.go:96] cache image "k8s.gcr.io/etcd:3.5.1-0" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.1-0" took 478.693µs
	I0412 20:06:38.349792  262043 cache.go:80] save to tar file k8s.gcr.io/kube-proxy:v1.23.6-rc.0 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/kube-proxy_v1.23.6-rc.0 succeeded
	I0412 20:06:38.349795  262043 cache.go:80] save to tar file k8s.gcr.io/etcd:3.5.1-0 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/etcd_3.5.1-0 succeeded
	I0412 20:06:38.349762  262043 cache.go:80] save to tar file k8s.gcr.io/coredns/coredns:v1.8.6 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/images/amd64/k8s.gcr.io/coredns/coredns_v1.8.6 succeeded
	I0412 20:06:38.349818  262043 cache.go:87] Successfully saved all images to host disk.
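Each cache.go:115/96/80 triplet above is one round of a check-then-skip pattern: under a per-image lock, the cached tarball path is stat'ed, and an existing file short-circuits the download (hence the microsecond "took" figures). A sketch of that pattern, with the path layout inferred from the log lines and the helper names ours:

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	// cachePath mirrors the layout above: the ":" before the image tag
	// becomes "_", e.g. k8s.gcr.io/pause:3.6 -> images/amd64/k8s.gcr.io/pause_3.6.
	func cachePath(cacheDir, imageRef string) string {
		return filepath.Join(cacheDir, "images", "amd64", strings.ReplaceAll(imageRef, ":", "_"))
	}

	// ensureCached reports whether the tarball already exists; on a miss
	// the caller would download the image and save it to the returned path.
	func ensureCached(cacheDir, imageRef string) (string, bool, error) {
		p := cachePath(cacheDir, imageRef)
		if _, err := os.Stat(p); err == nil {
			return p, true, nil // cache hit: skip the download
		} else if !os.IsNotExist(err) {
			return "", false, err
		}
		return p, false, nil // cache miss
	}

	func main() {
		p, hit, err := ensureCached(os.ExpandEnv("$HOME/.minikube/cache"), "k8s.gcr.io/pause:3.6")
		fmt.Println(p, hit, err)
	}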
	I0412 20:06:38.397946  262043 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon, skipping pull
	I0412 20:06:38.397982  262043 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 exists in daemon, skipping load
	I0412 20:06:38.397999  262043 cache.go:206] Successfully downloaded all kic artifacts
	I0412 20:06:38.398046  262043 start.go:352] acquiring machines lock for no-preload-20220412200453-42006: {Name:mk5e55d06e0b09ff05f6bc84f5bd170846683246 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0412 20:06:38.398148  262043 start.go:356] acquired machines lock for "no-preload-20220412200453-42006" in 81.316µs
	I0412 20:06:38.398172  262043 start.go:94] Skipping create...Using existing machine configuration
	I0412 20:06:38.398178  262043 fix.go:55] fixHost starting: 
	I0412 20:06:38.398439  262043 cli_runner.go:164] Run: docker container inspect no-preload-20220412200453-42006 --format={{.State.Status}}
	I0412 20:06:38.434741  262043 fix.go:103] recreateIfNeeded on no-preload-20220412200453-42006: state=Stopped err=<nil>
	W0412 20:06:38.434785  262043 fix.go:129] unexpected machine state, will restart: <nil>
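The start.go:352/356 pair acquires a named machines lock whose spec (Delay:500ms Timeout:10m0s) retries every half second for up to ten minutes, so concurrent profiles in this run cannot provision the same machine at once. A self-contained sketch of that acquire-with-retry-and-timeout shape using a lock file; this illustrates the pattern only and is not minikube's actual lock implementation:

	package main

	import (
		"errors"
		"fmt"
		"os"
		"path/filepath"
		"time"
	)

	// acquire polls for an exclusive lock file every delay until timeout,
	// mimicking the Delay/Timeout spec in the log. The returned func
	// releases the lock.
	func acquire(name string, delay, timeout time.Duration) (func(), error) {
		path := filepath.Join(os.TempDir(), name+".lock")
		deadline := time.Now().Add(timeout)
		for {
			f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
			if err == nil {
				f.Close()
				return func() { os.Remove(path) }, nil
			}
			if !errors.Is(err, os.ErrExist) {
				return nil, err
			}
			if time.Now().After(deadline) {
				return nil, fmt.Errorf("timed out acquiring lock %q", name)
			}
			time.Sleep(delay)
		}
	}

	func main() {
		release, err := acquire("machines-no-preload-20220412200453-42006", 500*time.Millisecond, 10*time.Minute)
		if err != nil {
			panic(err)
		}
		defer release()
		fmt.Println("lock held")
	}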
	I0412 20:06:35.616400  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:06:37.616983  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:06:40.116038  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:06:37.877798  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:06:40.377081  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:06:38.602879  242388 pod_ready.go:102] pod "coredns-64897985d-n8275" in "kube-system" namespace has status "Ready":"False"
	I0412 20:06:41.103081  242388 pod_ready.go:102] pod "coredns-64897985d-n8275" in "kube-system" namespace has status "Ready":"False"
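The interleaved node_ready.go:58 and pod_ready.go:102 lines come from the other profiles still polling readiness in parallel. The predicate behind a "Ready":"False" line can be sketched with client-go (the kubeconfig path and node name below are illustrative):

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// nodeReady reports whether the node's Ready condition is True, the
	// same check these log lines keep retrying.
	func nodeReady(cs kubernetes.Interface, name string) (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(context.Background(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		fmt.Println(nodeReady(cs, "embed-certs-20220412200510-42006"))
	}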
	I0412 20:06:38.438292  262043 out.go:176] * Restarting existing docker container for "no-preload-20220412200453-42006" ...
	I0412 20:06:38.438387  262043 cli_runner.go:164] Run: docker start no-preload-20220412200453-42006
	I0412 20:06:38.850014  262043 cli_runner.go:164] Run: docker container inspect no-preload-20220412200453-42006 --format={{.State.Status}}
	I0412 20:06:38.886204  262043 kic.go:416] container "no-preload-20220412200453-42006" state is running.
	I0412 20:06:38.886611  262043 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20220412200453-42006
	I0412 20:06:38.923268  262043 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220412200453-42006/config.json ...
	I0412 20:06:38.923827  262043 machine.go:88] provisioning docker machine ...
	I0412 20:06:38.923885  262043 ubuntu.go:169] provisioning hostname "no-preload-20220412200453-42006"
	I0412 20:06:38.923971  262043 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220412200453-42006
	I0412 20:06:38.962118  262043 main.go:134] libmachine: Using SSH client type: native
	I0412 20:06:38.962338  262043 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7d71c0] 0x7da220 <nil>  [] 0s} 127.0.0.1 49407 <nil> <nil>}
	I0412 20:06:38.962366  262043 main.go:134] libmachine: About to run SSH command:
	sudo hostname no-preload-20220412200453-42006 && echo "no-preload-20220412200453-42006" | sudo tee /etc/hostname
	I0412 20:06:38.963022  262043 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58218->127.0.0.1:49407: read: connection reset by peer
	I0412 20:06:42.098680  262043 main.go:134] libmachine: SSH cmd err, output: <nil>: no-preload-20220412200453-42006
	
	I0412 20:06:42.098774  262043 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220412200453-42006
	I0412 20:06:42.136233  262043 main.go:134] libmachine: Using SSH client type: native
	I0412 20:06:42.136411  262043 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7d71c0] 0x7da220 <nil>  [] 0s} 127.0.0.1 49407 <nil> <nil>}
	I0412 20:06:42.136446  262043 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-20220412200453-42006' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-20220412200453-42006/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-20220412200453-42006' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0412 20:06:42.256294  262043 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0412 20:06:42.256326  262043 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube}
	I0412 20:06:42.256353  262043 ubuntu.go:177] setting up certificates
	I0412 20:06:42.256366  262043 provision.go:83] configureAuth start
	I0412 20:06:42.256422  262043 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20220412200453-42006
	I0412 20:06:42.292692  262043 provision.go:138] copyHostCerts
	I0412 20:06:42.292765  262043 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem, removing ...
	I0412 20:06:42.292779  262043 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem
	I0412 20:06:42.292851  262043 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem (1082 bytes)
	I0412 20:06:42.292945  262043 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem, removing ...
	I0412 20:06:42.292956  262043 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem
	I0412 20:06:42.292982  262043 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem (1123 bytes)
	I0412 20:06:42.293044  262043 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem, removing ...
	I0412 20:06:42.293052  262043 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem
	I0412 20:06:42.293073  262043 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem (1675 bytes)
	I0412 20:06:42.293136  262043 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem org=jenkins.no-preload-20220412200453-42006 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube no-preload-20220412200453-42006]
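provision.go:112 generates the machine's server certificate, signed by the minikube CA, with the SANs listed above; IP SANs and DNS SANs travel in separate fields of the x509 template. A sketch of that SAN handling with crypto/x509, using a throwaway in-memory CA where minikube would load its persisted ca.pem/ca-key.pem:

	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		// Throwaway CA key; minikube signs with its persisted CA instead.
		caKey, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		if err != nil {
			panic(err)
		}
		ca := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		srvKey, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		if err != nil {
			panic(err)
		}
		// The SANs from the log line: IPs go to IPAddresses, names to DNSNames.
		srv := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "no-preload-20220412200453-42006"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			IPAddresses:  []net.IP{net.ParseIP("192.168.49.2"), net.ParseIP("127.0.0.1")},
			DNSNames:     []string{"localhost", "minikube", "no-preload-20220412200453-42006"},
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		der, err := x509.CreateCertificate(rand.Reader, srv, ca, &srvKey.PublicKey, caKey)
		fmt.Println(len(der), err)
	}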
	I0412 20:06:42.358423  262043 provision.go:172] copyRemoteCerts
	I0412 20:06:42.358486  262043 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0412 20:06:42.358525  262043 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220412200453-42006
	I0412 20:06:42.395317  262043 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49407 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/no-preload-20220412200453-42006/id_rsa Username:docker}
	I0412 20:06:42.484628  262043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0412 20:06:42.504389  262043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0412 20:06:42.522915  262043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0412 20:06:42.542117  262043 provision.go:86] duration metric: configureAuth took 285.73544ms
	I0412 20:06:42.542154  262043 ubuntu.go:193] setting minikube options for container-runtime
	I0412 20:06:42.542377  262043 config.go:178] Loaded profile config "no-preload-20220412200453-42006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6-rc.0
	I0412 20:06:42.542394  262043 machine.go:91] provisioned docker machine in 3.618527106s
	I0412 20:06:42.542402  262043 start.go:306] post-start starting for "no-preload-20220412200453-42006" (driver="docker")
	I0412 20:06:42.542415  262043 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0412 20:06:42.542453  262043 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0412 20:06:42.542495  262043 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220412200453-42006
	I0412 20:06:42.578654  262043 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49407 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/no-preload-20220412200453-42006/id_rsa Username:docker}
	I0412 20:06:42.667884  262043 ssh_runner.go:195] Run: cat /etc/os-release
	I0412 20:06:42.670582  262043 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0412 20:06:42.670604  262043 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0412 20:06:42.670613  262043 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0412 20:06:42.670620  262043 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0412 20:06:42.670631  262043 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/addons for local assets ...
	I0412 20:06:42.670678  262043 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files for local assets ...
	I0412 20:06:42.670745  262043 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/420062.pem -> 420062.pem in /etc/ssl/certs
	I0412 20:06:42.670826  262043 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0412 20:06:42.679250  262043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/420062.pem --> /etc/ssl/certs/420062.pem (1708 bytes)
	I0412 20:06:42.698560  262043 start.go:309] post-start completed in 156.135756ms
	I0412 20:06:42.698634  262043 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0412 20:06:42.698705  262043 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220412200453-42006
	I0412 20:06:42.735208  262043 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49407 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/no-preload-20220412200453-42006/id_rsa Username:docker}
	I0412 20:06:42.820837  262043 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0412 20:06:42.824992  262043 fix.go:57] fixHost completed within 4.426806364s
	I0412 20:06:42.825026  262043 start.go:81] releasing machines lock for "no-preload-20220412200453-42006", held for 4.42686368s
	I0412 20:06:42.825125  262043 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-20220412200453-42006
	I0412 20:06:42.860349  262043 ssh_runner.go:195] Run: systemctl --version
	I0412 20:06:42.860405  262043 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220412200453-42006
	I0412 20:06:42.860425  262043 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0412 20:06:42.860497  262043 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-20220412200453-42006
	I0412 20:06:42.899252  262043 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49407 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/no-preload-20220412200453-42006/id_rsa Username:docker}
	I0412 20:06:42.899693  262043 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49407 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/no-preload-20220412200453-42006/id_rsa Username:docker}
	I0412 20:06:43.008117  262043 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0412 20:06:43.021129  262043 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0412 20:06:43.031539  262043 docker.go:183] disabling docker service ...
	I0412 20:06:43.031610  262043 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0412 20:06:43.042865  262043 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0412 20:06:43.052974  262043 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0412 20:06:42.116983  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:06:44.616790  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:06:42.877160  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:06:45.376791  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:06:43.601289  242388 pod_ready.go:102] pod "coredns-64897985d-n8275" in "kube-system" namespace has status "Ready":"False"
	I0412 20:06:45.601448  242388 pod_ready.go:102] pod "coredns-64897985d-n8275" in "kube-system" namespace has status "Ready":"False"
	I0412 20:06:43.136830  262043 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0412 20:06:43.212117  262043 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0412 20:06:43.222157  262043 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0412 20:06:43.235875  262043 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLm1vbml0b3IudjEuY2dyb3VwcyJdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNiIKICAgIHN0YXRzX2NvbGxlY3RfcGVyaW9kID0gMTAKICAgIGVuYWJsZV90bHNfc3RyZWFtaW5nID0gZ
mFsc2UKICAgIG1heF9jb250YWluZXJfbG9nX2xpbmVfc2l6ZSA9IDE2Mzg0CiAgICByZXN0cmljdF9vb21fc2NvcmVfYWRqID0gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZF0KICAgICAgZGlzY2FyZF91bnBhY2tlZF9sYXllcnMgPSB0cnVlCiAgICAgIHNuYXBzaG90dGVyID0gIm92ZXJsYXlmcyIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQuZGVmYXVsdF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnVudHJ1c3RlZF93b3JrbG9hZF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICIiCiAgICAgICAgcnVudGltZV9lbmdpbmUgPSAiIgogICAgICAgIHJ1bnRpbWVfcm9vdCA9ICIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzLnJ1bmNdCiAgICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICBTeXN0ZW1kQ
2dyb3VwID0gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY25pXQogICAgICBiaW5fZGlyID0gIi9vcHQvY25pL2JpbiIKICAgICAgY29uZl9kaXIgPSAiL2V0Yy9jbmkvbmV0Lm1rIgogICAgICBjb25mX3RlbXBsYXRlID0gIiIKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5yZWdpc3RyeV0KICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5Lm1pcnJvcnNdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5Lm1pcnJvcnMuImRvY2tlci5pbyJdCiAgICAgICAgICBlbmRwb2ludCA9IFsiaHR0cHM6Ly9yZWdpc3RyeS0xLmRvY2tlci5pbyJdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuc2VydmljZS52MS5kaWZmLXNlcnZpY2UiXQogICAgZGVmYXVsdCA9IFsid2Fsa2luZyJdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ2MudjEuc2NoZWR1bGVyIl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
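
The command above ships the containerd configuration by base64-encoding the TOML locally and piping it through `base64 -d | sudo tee` on the node (the blob appears to decode to a config.toml selecting the io.containerd.runc.v2 runtime, SystemdCgroup = false, and CNI conf_dir /etc/cni/net.mk). A sketch of the same write done natively in Go, with the blob elided:

package main

import (
	"encoding/base64"
	"os"
)

func main() {
	const configB64 = "..." // stands for the base64 TOML from the log line above
	data, err := base64.StdEncoding.DecodeString(configB64)
	if err != nil {
		panic(err)
	}
	// 0644 gives a world-readable config; root ownership comes from running as root
	// (the logged pipeline gets it from sudo tee).
	if err := os.WriteFile("/etc/containerd/config.toml", data, 0o644); err != nil {
		panic(err)
	}
}
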
	I0412 20:06:43.250199  262043 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0412 20:06:43.257113  262043 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0412 20:06:43.263893  262043 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0412 20:06:43.341312  262043 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0412 20:06:43.418167  262043 start.go:441] Will wait 60s for socket path /run/containerd/containerd.sock
	I0412 20:06:43.418236  262043 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0412 20:06:43.422203  262043 start.go:462] Will wait 60s for crictl version
	I0412 20:06:43.422257  262043 ssh_runner.go:195] Run: sudo crictl version
	I0412 20:06:43.450330  262043 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-04-12T20:06:43Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0412 20:06:47.116435  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:06:49.117177  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:06:47.377539  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:06:49.876781  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:06:51.877708  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:06:48.101289  242388 pod_ready.go:102] pod "coredns-64897985d-n8275" in "kube-system" namespace has status "Ready":"False"
	I0412 20:06:50.101523  242388 pod_ready.go:102] pod "coredns-64897985d-n8275" in "kube-system" namespace has status "Ready":"False"
	I0412 20:06:52.101737  242388 pod_ready.go:102] pod "coredns-64897985d-n8275" in "kube-system" namespace has status "Ready":"False"
	I0412 20:06:54.498500  262043 ssh_runner.go:195] Run: sudo crictl version
	I0412 20:06:54.523682  262043 start.go:471] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.5.10
	RuntimeApiVersion:  v1alpha2
	I0412 20:06:54.523744  262043 ssh_runner.go:195] Run: containerd --version
	I0412 20:06:54.544706  262043 ssh_runner.go:195] Run: containerd --version
	I0412 20:06:54.569217  262043 out.go:176] * Preparing Kubernetes v1.23.6-rc.0 on containerd 1.5.10 ...
	I0412 20:06:54.569293  262043 cli_runner.go:164] Run: docker network inspect no-preload-20220412200453-42006 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0412 20:06:54.608131  262043 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0412 20:06:54.611871  262043 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0412 20:06:51.118411  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:06:53.616135  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:06:54.376925  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:06:56.377007  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:06:54.101816  242388 pod_ready.go:102] pod "coredns-64897985d-n8275" in "kube-system" namespace has status "Ready":"False"
	I0412 20:06:56.601365  242388 pod_ready.go:102] pod "coredns-64897985d-n8275" in "kube-system" namespace has status "Ready":"False"
	I0412 20:06:54.624326  262043 out.go:176]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0412 20:06:54.624409  262043 preload.go:132] Checking if preload exists for k8s version v1.23.6-rc.0 and runtime containerd
	I0412 20:06:54.624470  262043 ssh_runner.go:195] Run: sudo crictl images --output json
	I0412 20:06:54.650522  262043 containerd.go:607] all images are preloaded for containerd runtime.
	I0412 20:06:54.650556  262043 cache_images.go:84] Images are preloaded, skipping loading
	I0412 20:06:54.650602  262043 ssh_runner.go:195] Run: sudo crictl info
	I0412 20:06:54.676728  262043 cni.go:93] Creating CNI manager for ""
	I0412 20:06:54.676761  262043 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0412 20:06:54.676777  262043 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0412 20:06:54.676797  262043 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.23.6-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-20220412200453-42006 NodeName:no-preload-20220412200453-42006 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs Cl
ientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0412 20:06:54.676953  262043 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "no-preload-20220412200453-42006"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
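
The four YAML documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) travel as a single multi-document file to /var/tmp/minikube/kubeadm.yaml. A sketch that decodes such a stream with gopkg.in/yaml.v2, e.g. to sanity-check it before shipping — an illustration, not minikube's own validation:

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v2"
)

func main() {
	f, err := os.Open("kubeadm.yaml") // assumption: the generated config saved locally
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for i := 1; ; i++ {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(fmt.Sprintf("document %d is malformed: %v", i, err))
		}
		fmt.Printf("doc %d: apiVersion=%v kind=%v\n", i, doc["apiVersion"], doc["kind"])
	}
}
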
	
	I0412 20:06:54.677056  262043 kubeadm.go:936] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=no-preload-20220412200453-42006 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6-rc.0 ClusterName:no-preload-20220412200453-42006 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0412 20:06:54.677120  262043 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6-rc.0
	I0412 20:06:54.685668  262043 binaries.go:44] Found k8s binaries, skipping transfer
	I0412 20:06:54.685760  262043 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0412 20:06:54.693734  262043 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (581 bytes)
	I0412 20:06:54.708454  262043 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0412 20:06:54.722148  262043 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2065 bytes)
	I0412 20:06:54.735859  262043 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0412 20:06:54.738901  262043 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
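
The one-liner above makes the /etc/hosts entry idempotent: drop any line already ending in the managed hostname, append the fresh mapping, and copy the result back into place. The same edit as a Go sketch (must run as root; IP and hostname taken from the log):

package main

import (
	"os"
	"strings"
)

func main() {
	const name = "control-plane.minikube.internal"
	const entry = "192.168.49.2\t" + name

	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Same filter as the logged grep -v $'\t<name>$'.
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry, "") // trailing "" keeps the final newline
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")), 0o644); err != nil {
		panic(err)
	}
}
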
	I0412 20:06:54.748842  262043 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220412200453-42006 for IP: 192.168.49.2
	I0412 20:06:54.748963  262043 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.key
	I0412 20:06:54.749000  262043 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.key
	I0412 20:06:54.749075  262043 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220412200453-42006/client.key
	I0412 20:06:54.749132  262043 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220412200453-42006/apiserver.key.dd3b5fb2
	I0412 20:06:54.749166  262043 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220412200453-42006/proxy-client.key
	I0412 20:06:54.749256  262043 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/42006.pem (1338 bytes)
	W0412 20:06:54.749286  262043 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/42006_empty.pem, impossibly tiny 0 bytes
	I0412 20:06:54.749298  262043 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem (1679 bytes)
	I0412 20:06:54.749321  262043 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem (1082 bytes)
	I0412 20:06:54.749354  262043 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem (1123 bytes)
	I0412 20:06:54.749382  262043 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem (1675 bytes)
	I0412 20:06:54.749425  262043 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/420062.pem (1708 bytes)
	I0412 20:06:54.750018  262043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220412200453-42006/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0412 20:06:54.769182  262043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220412200453-42006/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0412 20:06:54.789702  262043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220412200453-42006/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0412 20:06:54.809714  262043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220412200453-42006/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0412 20:06:54.828243  262043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0412 20:06:54.846446  262043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0412 20:06:54.865190  262043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0412 20:06:54.885291  262043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0412 20:06:54.905894  262043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/420062.pem --> /usr/share/ca-certificates/420062.pem (1708 bytes)
	I0412 20:06:54.926143  262043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0412 20:06:54.945078  262043 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/42006.pem --> /usr/share/ca-certificates/42006.pem (1338 bytes)
	I0412 20:06:54.963695  262043 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0412 20:06:54.977881  262043 ssh_runner.go:195] Run: openssl version
	I0412 20:06:54.983989  262043 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/420062.pem && ln -fs /usr/share/ca-certificates/420062.pem /etc/ssl/certs/420062.pem"
	I0412 20:06:54.993645  262043 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/420062.pem
	I0412 20:06:54.997307  262043 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Apr 12 19:26 /usr/share/ca-certificates/420062.pem
	I0412 20:06:54.997360  262043 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/420062.pem
	I0412 20:06:55.002865  262043 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/420062.pem /etc/ssl/certs/3ec20f2e.0"
	I0412 20:06:55.011027  262043 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0412 20:06:55.019101  262043 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0412 20:06:55.022548  262043 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Apr 12 19:21 /usr/share/ca-certificates/minikubeCA.pem
	I0412 20:06:55.022605  262043 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0412 20:06:55.027910  262043 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0412 20:06:55.035322  262043 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/42006.pem && ln -fs /usr/share/ca-certificates/42006.pem /etc/ssl/certs/42006.pem"
	I0412 20:06:55.043746  262043 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42006.pem
	I0412 20:06:55.047043  262043 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Apr 12 19:26 /usr/share/ca-certificates/42006.pem
	I0412 20:06:55.047115  262043 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42006.pem
	I0412 20:06:55.052304  262043 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/42006.pem /etc/ssl/certs/51391683.0"
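
The openssl/ln sequence above installs each CA into the OpenSSL hash directory: `openssl x509 -hash` yields the subject hash (3ec20f2e, b5213941, 51391683 in the log) that names the /etc/ssl/certs/<hash>.0 symlink. A sketch of the same dance for one certificate; assumes openssl on PATH and root privileges:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	_ = os.Remove(link) // -f semantics: replace an existing link if present
	if err := os.Symlink(pem, link); err != nil {
		panic(err)
	}
	fmt.Println(link, "->", pem)
}
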
	I0412 20:06:55.059836  262043 kubeadm.go:391] StartCluster: {Name:no-preload-20220412200453-42006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6-rc.0 ClusterName:no-preload-20220412200453-42006 Namespace:default APIServerName:minikubeC
A APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.6-rc.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] Sta
rtHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0412 20:06:55.059954  262043 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0412 20:06:55.059998  262043 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0412 20:06:55.087591  262043 cri.go:87] found id: "902900f058f19c75879df7920ae1fe5c187eedf72398c8b16d122f6f045bc93b"
	I0412 20:06:55.087619  262043 cri.go:87] found id: "663712be1e7cf421d3ad279c7e52a1827ee612dc04e50e046acd97b607610a9e"
	I0412 20:06:55.087626  262043 cri.go:87] found id: "6741f3eed1f950fadac5b3bfa91947af6095899aca206fd94c670ab6f0a7847a"
	I0412 20:06:55.087632  262043 cri.go:87] found id: "359eccc90aee595a6b67b52c56dfc92af2ca025088e4905056ca81c55c963d6f"
	I0412 20:06:55.087638  262043 cri.go:87] found id: "be25a1a0bb72db53d6c18b365f5ad018d89bb4cf7d5f9a2baf8d4240564b4454"
	I0412 20:06:55.087644  262043 cri.go:87] found id: "9920f60dd74ddee8a369cd42569d4af3e1c3d0fc4879e75d4b7f55ca9cbfc159"
	I0412 20:06:55.087650  262043 cri.go:87] found id: "7e06f4978c87749d49342b41b040b454aa3ec9fa86970708570f721a2a623b50"
	I0412 20:06:55.087655  262043 cri.go:87] found id: "28053ce3f430b4c659c9f2bfffb00de41631d4e3ecbcfa9e4a1dcafbe76fd144"
	I0412 20:06:55.087661  262043 cri.go:87] found id: ""
	I0412 20:06:55.087701  262043 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0412 20:06:55.104645  262043 cri.go:114] JSON = null
	W0412 20:06:55.104706  262043 kubeadm.go:398] unpause failed: list paused: list returned 0 containers, but ps returned 8
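
The warning above comes from comparing two views of the runtime: crictl ps found 8 kube-system containers while `runc list -f json` printed null (nothing paused under minikube's runc root). A sketch of that cross-check; the JSON field names follow runc's state output, and "null" decodes cleanly to a nil slice:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	psOut, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		panic(err)
	}
	ids := strings.Fields(string(psOut))

	runcOut, err := exec.Command("sudo", "runc",
		"--root", "/run/containerd/runc/k8s.io", "list", "-f", "json").Output()
	if err != nil {
		panic(err)
	}
	var listed []struct {
		ID     string `json:"id"`
		Status string `json:"status"`
	}
	if err := json.Unmarshal(runcOut, &listed); err != nil {
		panic(err)
	}
	if len(listed) != len(ids) {
		fmt.Printf("unpause check: list returned %d containers, but ps returned %d\n",
			len(listed), len(ids))
	}
}
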
	I0412 20:06:55.104768  262043 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0412 20:06:55.113235  262043 kubeadm.go:402] found existing configuration files, will attempt cluster restart
	I0412 20:06:55.113267  262043 kubeadm.go:601] restartCluster start
	I0412 20:06:55.113330  262043 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0412 20:06:55.121394  262043 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:06:55.122364  262043 kubeconfig.go:116] verify returned: extract IP: "no-preload-20220412200453-42006" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0412 20:06:55.122978  262043 kubeconfig.go:127] "no-preload-20220412200453-42006" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig - will repair!
	I0412 20:06:55.123815  262043 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig: {Name:mk47182dadcc139652898f38b199a7292a3b4031 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 20:06:55.125557  262043 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0412 20:06:55.132989  262043 api_server.go:165] Checking apiserver status ...
	I0412 20:06:55.133058  262043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:06:55.141992  262043 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:06:55.342388  262043 api_server.go:165] Checking apiserver status ...
	I0412 20:06:55.342462  262043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:06:55.351633  262043 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:06:55.542896  262043 api_server.go:165] Checking apiserver status ...
	I0412 20:06:55.542982  262043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:06:55.552707  262043 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:06:55.742924  262043 api_server.go:165] Checking apiserver status ...
	I0412 20:06:55.743049  262043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:06:55.752665  262043 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:06:55.942906  262043 api_server.go:165] Checking apiserver status ...
	I0412 20:06:55.943016  262043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:06:55.952455  262043 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:06:56.142683  262043 api_server.go:165] Checking apiserver status ...
	I0412 20:06:56.142778  262043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:06:56.152261  262043 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:06:56.342577  262043 api_server.go:165] Checking apiserver status ...
	I0412 20:06:56.342673  262043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:06:56.352902  262043 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:06:56.542076  262043 api_server.go:165] Checking apiserver status ...
	I0412 20:06:56.542180  262043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:06:56.551444  262043 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:06:56.742688  262043 api_server.go:165] Checking apiserver status ...
	I0412 20:06:56.742769  262043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:06:56.752796  262043 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:06:56.942936  262043 api_server.go:165] Checking apiserver status ...
	I0412 20:06:56.943045  262043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:06:56.952305  262043 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:06:57.142573  262043 api_server.go:165] Checking apiserver status ...
	I0412 20:06:57.142664  262043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:06:57.151862  262043 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:06:57.342121  262043 api_server.go:165] Checking apiserver status ...
	I0412 20:06:57.342210  262043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:06:57.351557  262043 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:06:57.542857  262043 api_server.go:165] Checking apiserver status ...
	I0412 20:06:57.542941  262043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:06:57.552265  262043 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:06:57.742615  262043 api_server.go:165] Checking apiserver status ...
	I0412 20:06:57.742695  262043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:06:57.752050  262043 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:06:57.942284  262043 api_server.go:165] Checking apiserver status ...
	I0412 20:06:57.942397  262043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:06:57.951581  262043 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
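
The block above is a poll loop: roughly every 200ms, pgrep looks for a kube-apiserver process and exits 1 while none exists, which ssh_runner reports as "Process exited with status 1" with empty stdout/stderr. A minimal sketch of the loop (interval read off the timestamps; the real code also enforces a deadline):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	tick := time.NewTicker(200 * time.Millisecond)
	defer tick.Stop()
	for range tick.C {
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err != nil {
			continue // pgrep exit status 1: apiserver not up yet, keep polling
		}
		fmt.Println("apiserver pid:", strings.TrimSpace(string(out)))
		return
	}
}
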
	I0412 20:06:55.616256  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:06:57.616496  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:06:59.616535  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:06:58.377120  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:07:00.876752  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:06:59.101887  242388 pod_ready.go:102] pod "coredns-64897985d-n8275" in "kube-system" namespace has status "Ready":"False"
	I0412 20:07:01.601425  242388 pod_ready.go:102] pod "coredns-64897985d-n8275" in "kube-system" namespace has status "Ready":"False"
	I0412 20:06:58.142257  262043 api_server.go:165] Checking apiserver status ...
	I0412 20:06:58.142347  262043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:06:58.151550  262043 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:06:58.151581  262043 api_server.go:165] Checking apiserver status ...
	I0412 20:06:58.151623  262043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:06:58.159939  262043 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:06:58.159969  262043 kubeadm.go:576] needs reconfigure: apiserver error: timed out waiting for the condition
	I0412 20:06:58.159976  262043 kubeadm.go:1067] stopping kube-system containers ...
	I0412 20:06:58.159990  262043 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0412 20:06:58.160053  262043 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0412 20:06:58.188974  262043 cri.go:87] found id: "902900f058f19c75879df7920ae1fe5c187eedf72398c8b16d122f6f045bc93b"
	I0412 20:06:58.189006  262043 cri.go:87] found id: "663712be1e7cf421d3ad279c7e52a1827ee612dc04e50e046acd97b607610a9e"
	I0412 20:06:58.189015  262043 cri.go:87] found id: "6741f3eed1f950fadac5b3bfa91947af6095899aca206fd94c670ab6f0a7847a"
	I0412 20:06:58.189027  262043 cri.go:87] found id: "359eccc90aee595a6b67b52c56dfc92af2ca025088e4905056ca81c55c963d6f"
	I0412 20:06:58.189036  262043 cri.go:87] found id: "be25a1a0bb72db53d6c18b365f5ad018d89bb4cf7d5f9a2baf8d4240564b4454"
	I0412 20:06:58.189046  262043 cri.go:87] found id: "9920f60dd74ddee8a369cd42569d4af3e1c3d0fc4879e75d4b7f55ca9cbfc159"
	I0412 20:06:58.189062  262043 cri.go:87] found id: "7e06f4978c87749d49342b41b040b454aa3ec9fa86970708570f721a2a623b50"
	I0412 20:06:58.189077  262043 cri.go:87] found id: "28053ce3f430b4c659c9f2bfffb00de41631d4e3ecbcfa9e4a1dcafbe76fd144"
	I0412 20:06:58.189091  262043 cri.go:87] found id: ""
	I0412 20:06:58.189105  262043 cri.go:232] Stopping containers: [902900f058f19c75879df7920ae1fe5c187eedf72398c8b16d122f6f045bc93b 663712be1e7cf421d3ad279c7e52a1827ee612dc04e50e046acd97b607610a9e 6741f3eed1f950fadac5b3bfa91947af6095899aca206fd94c670ab6f0a7847a 359eccc90aee595a6b67b52c56dfc92af2ca025088e4905056ca81c55c963d6f be25a1a0bb72db53d6c18b365f5ad018d89bb4cf7d5f9a2baf8d4240564b4454 9920f60dd74ddee8a369cd42569d4af3e1c3d0fc4879e75d4b7f55ca9cbfc159 7e06f4978c87749d49342b41b040b454aa3ec9fa86970708570f721a2a623b50 28053ce3f430b4c659c9f2bfffb00de41631d4e3ecbcfa9e4a1dcafbe76fd144]
	I0412 20:06:58.189170  262043 ssh_runner.go:195] Run: which crictl
	I0412 20:06:58.192496  262043 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop 902900f058f19c75879df7920ae1fe5c187eedf72398c8b16d122f6f045bc93b 663712be1e7cf421d3ad279c7e52a1827ee612dc04e50e046acd97b607610a9e 6741f3eed1f950fadac5b3bfa91947af6095899aca206fd94c670ab6f0a7847a 359eccc90aee595a6b67b52c56dfc92af2ca025088e4905056ca81c55c963d6f be25a1a0bb72db53d6c18b365f5ad018d89bb4cf7d5f9a2baf8d4240564b4454 9920f60dd74ddee8a369cd42569d4af3e1c3d0fc4879e75d4b7f55ca9cbfc159 7e06f4978c87749d49342b41b040b454aa3ec9fa86970708570f721a2a623b50 28053ce3f430b4c659c9f2bfffb00de41631d4e3ecbcfa9e4a1dcafbe76fd144
	I0412 20:06:58.221614  262043 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0412 20:06:58.233286  262043 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0412 20:06:58.241242  262043 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Apr 12 20:05 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Apr 12 20:05 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2063 Apr 12 20:05 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Apr 12 20:05 /etc/kubernetes/scheduler.conf
	
	I0412 20:06:58.241317  262043 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0412 20:06:58.248808  262043 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0412 20:06:58.256355  262043 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0412 20:06:58.263501  262043 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:06:58.263579  262043 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0412 20:06:58.270559  262043 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0412 20:06:58.277975  262043 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:06:58.278046  262043 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0412 20:06:58.285321  262043 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0412 20:06:58.294292  262043 kubeadm.go:678] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0412 20:06:58.294326  262043 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6-rc.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0412 20:06:58.338348  262043 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6-rc.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0412 20:06:58.973712  262043 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6-rc.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0412 20:06:59.119303  262043 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6-rc.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0412 20:06:59.167022  262043 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6-rc.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0412 20:06:59.220626  262043 api_server.go:51] waiting for apiserver process to appear ...
	I0412 20:06:59.220700  262043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:06:59.730082  262043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:07:00.229868  262043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:07:00.729838  262043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:07:01.230186  262043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:07:01.729607  262043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:07:02.230239  262043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:07:02.730366  262043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:07:01.616795  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:07:04.116517  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:07:02.877545  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:07:05.377276  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:07:04.102441  242388 pod_ready.go:102] pod "coredns-64897985d-n8275" in "kube-system" namespace has status "Ready":"False"
	I0412 20:07:06.102522  242388 pod_ready.go:102] pod "coredns-64897985d-n8275" in "kube-system" namespace has status "Ready":"False"
	I0412 20:07:03.230282  262043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:07:03.730277  262043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:07:04.229692  262043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:07:04.730521  262043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:07:05.230320  262043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:07:05.295798  262043 api_server.go:71] duration metric: took 6.075172654s to wait for apiserver process to appear ...
	I0412 20:07:05.295834  262043 api_server.go:87] waiting for apiserver healthz status ...
	I0412 20:07:05.295848  262043 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0412 20:07:05.296366  262043 api_server.go:256] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": dial tcp 192.168.49.2:8443: connect: connection refused
	I0412 20:07:05.797121  262043 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0412 20:07:08.128253  262043 api_server.go:266] https://192.168.49.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0412 20:07:08.128295  262043 api_server.go:102] status: https://192.168.49.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0412 20:07:08.296479  262043 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0412 20:07:08.303675  262043 api_server.go:266] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0412 20:07:08.303712  262043 api_server.go:102] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0412 20:07:08.797254  262043 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0412 20:07:08.802116  262043 api_server.go:266] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0412 20:07:08.802146  262043 api_server.go:102] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0412 20:07:09.296686  262043 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0412 20:07:09.301615  262043 api_server.go:266] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0412 20:07:09.301650  262043 api_server.go:102] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0412 20:07:09.797301  262043 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0412 20:07:09.803564  262043 api_server.go:266] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0412 20:07:09.811289  262043 api_server.go:140] control plane version: v1.23.6-rc.0
	I0412 20:07:09.811325  262043 api_server.go:130] duration metric: took 4.515484491s to wait for apiserver health ...
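
The healthz probe above walks the typical restart sequence: connection refused while nothing listens, 403 while the RBAC bootstrap hasn't yet granted system:anonymous access to /healthz, 500 while poststarthooks are still failing, and finally 200 "ok". A sketch of such a probe; InsecureSkipVerify is used because the test cluster's self-signed certificate isn't in the local trust store:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for {
		resp, err := client.Get("https://192.168.49.2:8443/healthz")
		if err != nil {
			time.Sleep(500 * time.Millisecond) // connection refused: apiserver not listening yet
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK && string(body) == "ok" {
			fmt.Println("apiserver healthy")
			return
		}
		fmt.Printf("healthz %d, retrying\n", resp.StatusCode) // 403/500 both mean "not ready yet"
		time.Sleep(500 * time.Millisecond)
	}
}
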
	I0412 20:07:09.811339  262043 cni.go:93] Creating CNI manager for ""
	I0412 20:07:09.811347  262043 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0412 20:07:06.116969  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:07:08.117491  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:07:07.876894  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:07:09.877235  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:07:08.601586  242388 pod_ready.go:102] pod "coredns-64897985d-n8275" in "kube-system" namespace has status "Ready":"False"
	I0412 20:07:10.602554  242388 pod_ready.go:102] pod "coredns-64897985d-n8275" in "kube-system" namespace has status "Ready":"False"
	I0412 20:07:09.814030  262043 out.go:176] * Configuring CNI (Container Networking Interface) ...
	I0412 20:07:09.814109  262043 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0412 20:07:09.818353  262043 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.6-rc.0/kubectl ...
	I0412 20:07:09.818376  262043 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0412 20:07:09.861903  262043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6-rc.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0412 20:07:11.048423  262043 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.23.6-rc.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.186475348s)
	I0412 20:07:11.048465  262043 system_pods.go:43] waiting for kube-system pods to appear ...
	I0412 20:07:11.055790  262043 system_pods.go:59] 9 kube-system pods found
	I0412 20:07:11.055822  262043 system_pods.go:61] "coredns-64897985d-7fs64" [12c651ff-9508-4a46-9c6f-3bf20b59dfae] Running
	I0412 20:07:11.055830  262043 system_pods.go:61] "etcd-no-preload-20220412200453-42006" [bdfa6f43-91b7-40d0-9c3f-7684ad85c38e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0412 20:07:11.055838  262043 system_pods.go:61] "kindnet-rv4qh" [db399dcc-0c32-427a-b14a-9653948e580d] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0412 20:07:11.055843  262043 system_pods.go:61] "kube-apiserver-no-preload-20220412200453-42006" [b1a9cfa6-973a-43f9-9bb4-c4db4b40367f] Running
	I0412 20:07:11.055850  262043 system_pods.go:61] "kube-controller-manager-no-preload-20220412200453-42006" [2cf35c18-75b1-4645-af1e-dbc8d5e55b73] Running
	I0412 20:07:11.055854  262043 system_pods.go:61] "kube-proxy-tctg4" [caa02c16-d30f-48d0-b131-20d3bab70353] Running
	I0412 20:07:11.055858  262043 system_pods.go:61] "kube-scheduler-no-preload-20220412200453-42006" [c3aec238-45e4-4049-876c-f271b9977d2a] Running
	I0412 20:07:11.055865  262043 system_pods.go:61] "metrics-server-b955d9d8-2chfs" [6327233c-6326-4459-b2e2-7ec9aa727186] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0412 20:07:11.055872  262043 system_pods.go:61] "storage-provisioner" [d44a5e95-5510-4f04-b075-c910ed6f1b80] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0412 20:07:11.055877  262043 system_pods.go:74] duration metric: took 7.40521ms to wait for pod list to return data ...
	I0412 20:07:11.055885  262043 node_conditions.go:102] verifying NodePressure condition ...
	I0412 20:07:11.058382  262043 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0412 20:07:11.058412  262043 node_conditions.go:123] node cpu capacity is 8
	I0412 20:07:11.058424  262043 node_conditions.go:105] duration metric: took 2.527202ms to run NodePressure ...
	I0412 20:07:11.058442  262043 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0412 20:07:11.202271  262043 kubeadm.go:737] waiting for restarted kubelet to initialise ...
	I0412 20:07:11.207435  262043 kubeadm.go:752] kubelet initialised
	I0412 20:07:11.207511  262043 kubeadm.go:753] duration metric: took 5.204568ms waiting for restarted kubelet to initialise ...
	I0412 20:07:11.207530  262043 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0412 20:07:11.231634  262043 pod_ready.go:78] waiting up to 4m0s for pod "coredns-64897985d-7fs64" in "kube-system" namespace to be "Ready" ...
	I0412 20:07:11.237543  262043 pod_ready.go:92] pod "coredns-64897985d-7fs64" in "kube-system" namespace has status "Ready":"True"
	I0412 20:07:11.237570  262043 pod_ready.go:81] duration metric: took 5.894944ms waiting for pod "coredns-64897985d-7fs64" in "kube-system" namespace to be "Ready" ...
	I0412 20:07:11.237582  262043 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-20220412200453-42006" in "kube-system" namespace to be "Ready" ...
	I0412 20:07:10.617366  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:07:13.116482  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:07:12.377352  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:07:14.876965  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:07:13.100979  242388 pod_ready.go:102] pod "coredns-64897985d-n8275" in "kube-system" namespace has status "Ready":"False"
	I0412 20:07:15.101733  242388 pod_ready.go:102] pod "coredns-64897985d-n8275" in "kube-system" namespace has status "Ready":"False"
	I0412 20:07:17.101798  242388 pod_ready.go:102] pod "coredns-64897985d-n8275" in "kube-system" namespace has status "Ready":"False"
	I0412 20:07:13.288105  262043 pod_ready.go:102] pod "etcd-no-preload-20220412200453-42006" in "kube-system" namespace has status "Ready":"False"
	I0412 20:07:15.288923  262043 pod_ready.go:102] pod "etcd-no-preload-20220412200453-42006" in "kube-system" namespace has status "Ready":"False"
	I0412 20:07:17.790592  262043 pod_ready.go:102] pod "etcd-no-preload-20220412200453-42006" in "kube-system" namespace has status "Ready":"False"
	I0412 20:07:15.616860  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:07:17.616909  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:07:20.116362  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:07:17.377125  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:07:19.377352  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:07:21.877681  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:07:19.102034  242388 pod_ready.go:102] pod "coredns-64897985d-n8275" in "kube-system" namespace has status "Ready":"False"
	I0412 20:07:21.601067  242388 pod_ready.go:102] pod "coredns-64897985d-n8275" in "kube-system" namespace has status "Ready":"False"
	I0412 20:07:18.788512  262043 pod_ready.go:92] pod "etcd-no-preload-20220412200453-42006" in "kube-system" namespace has status "Ready":"True"
	I0412 20:07:18.788541  262043 pod_ready.go:81] duration metric: took 7.550951484s waiting for pod "etcd-no-preload-20220412200453-42006" in "kube-system" namespace to be "Ready" ...
	I0412 20:07:18.788554  262043 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-20220412200453-42006" in "kube-system" namespace to be "Ready" ...
	I0412 20:07:19.300834  262043 pod_ready.go:92] pod "kube-apiserver-no-preload-20220412200453-42006" in "kube-system" namespace has status "Ready":"True"
	I0412 20:07:19.300866  262043 pod_ready.go:81] duration metric: took 512.302546ms waiting for pod "kube-apiserver-no-preload-20220412200453-42006" in "kube-system" namespace to be "Ready" ...
	I0412 20:07:19.300892  262043 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-20220412200453-42006" in "kube-system" namespace to be "Ready" ...
	I0412 20:07:20.813288  262043 pod_ready.go:92] pod "kube-controller-manager-no-preload-20220412200453-42006" in "kube-system" namespace has status "Ready":"True"
	I0412 20:07:20.813330  262043 pod_ready.go:81] duration metric: took 1.512427511s waiting for pod "kube-controller-manager-no-preload-20220412200453-42006" in "kube-system" namespace to be "Ready" ...
	I0412 20:07:20.813345  262043 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-tctg4" in "kube-system" namespace to be "Ready" ...
	I0412 20:07:20.818389  262043 pod_ready.go:92] pod "kube-proxy-tctg4" in "kube-system" namespace has status "Ready":"True"
	I0412 20:07:20.818418  262043 pod_ready.go:81] duration metric: took 5.063428ms waiting for pod "kube-proxy-tctg4" in "kube-system" namespace to be "Ready" ...
	I0412 20:07:20.818430  262043 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-20220412200453-42006" in "kube-system" namespace to be "Ready" ...
	I0412 20:07:22.828651  262043 pod_ready.go:102] pod "kube-scheduler-no-preload-20220412200453-42006" in "kube-system" namespace has status "Ready":"False"
	I0412 20:07:22.116777  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:07:24.117192  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:07:24.376731  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:07:26.377297  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:07:23.601712  242388 pod_ready.go:102] pod "coredns-64897985d-n8275" in "kube-system" namespace has status "Ready":"False"
	I0412 20:07:26.101443  242388 pod_ready.go:102] pod "coredns-64897985d-n8275" in "kube-system" namespace has status "Ready":"False"
	I0412 20:07:24.329319  262043 pod_ready.go:92] pod "kube-scheduler-no-preload-20220412200453-42006" in "kube-system" namespace has status "Ready":"True"
	I0412 20:07:24.329352  262043 pod_ready.go:81] duration metric: took 3.510912342s waiting for pod "kube-scheduler-no-preload-20220412200453-42006" in "kube-system" namespace to be "Ready" ...
	I0412 20:07:24.329367  262043 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace to be "Ready" ...
	I0412 20:07:26.343964  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:07:26.616625  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:07:29.116436  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:07:28.377600  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:07:30.876826  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:07:28.102245  242388 pod_ready.go:102] pod "coredns-64897985d-n8275" in "kube-system" namespace has status "Ready":"False"
	I0412 20:07:30.601637  242388 pod_ready.go:102] pod "coredns-64897985d-n8275" in "kube-system" namespace has status "Ready":"False"
	I0412 20:07:28.843575  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:07:31.343620  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:07:31.615877  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:07:33.616194  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:07:32.876932  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:07:34.877535  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:07:33.101370  242388 pod_ready.go:102] pod "coredns-64897985d-n8275" in "kube-system" namespace has status "Ready":"False"
	I0412 20:07:35.601954  242388 pod_ready.go:102] pod "coredns-64897985d-n8275" in "kube-system" namespace has status "Ready":"False"
	I0412 20:07:33.344119  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:07:35.345714  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:07:37.843395  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:07:35.617001  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:07:38.116259  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:07:40.116468  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:07:37.377700  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:07:39.876618  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:07:41.877026  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:07:38.100797  242388 pod_ready.go:102] pod "coredns-64897985d-n8275" in "kube-system" namespace has status "Ready":"False"
	I0412 20:07:40.101381  242388 pod_ready.go:102] pod "coredns-64897985d-n8275" in "kube-system" namespace has status "Ready":"False"
	I0412 20:07:40.344173  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:07:42.843994  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:07:42.116642  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:07:44.116961  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:07:43.877127  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:07:45.877640  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:07:42.601782  242388 pod_ready.go:102] pod "coredns-64897985d-n8275" in "kube-system" namespace has status "Ready":"False"
	I0412 20:07:45.101453  242388 pod_ready.go:102] pod "coredns-64897985d-n8275" in "kube-system" namespace has status "Ready":"False"
	I0412 20:07:47.101535  242388 pod_ready.go:102] pod "coredns-64897985d-n8275" in "kube-system" namespace has status "Ready":"False"
	I0412 20:07:47.106719  242388 pod_ready.go:81] duration metric: took 4m0.017548884s waiting for pod "coredns-64897985d-n8275" in "kube-system" namespace to be "Ready" ...
	E0412 20:07:47.106749  242388 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0412 20:07:47.106760  242388 pod_ready.go:78] waiting up to 5m0s for pod "etcd-bridge-20220412195202-42006" in "kube-system" namespace to be "Ready" ...
	I0412 20:07:47.111547  242388 pod_ready.go:92] pod "etcd-bridge-20220412195202-42006" in "kube-system" namespace has status "Ready":"True"
	I0412 20:07:47.111568  242388 pod_ready.go:81] duration metric: took 4.800194ms waiting for pod "etcd-bridge-20220412195202-42006" in "kube-system" namespace to be "Ready" ...
	I0412 20:07:47.111577  242388 pod_ready.go:78] waiting up to 5m0s for pod "kube-apiserver-bridge-20220412195202-42006" in "kube-system" namespace to be "Ready" ...
	I0412 20:07:47.116360  242388 pod_ready.go:92] pod "kube-apiserver-bridge-20220412195202-42006" in "kube-system" namespace has status "Ready":"True"
	I0412 20:07:47.116386  242388 pod_ready.go:81] duration metric: took 4.802187ms waiting for pod "kube-apiserver-bridge-20220412195202-42006" in "kube-system" namespace to be "Ready" ...
	I0412 20:07:47.116401  242388 pod_ready.go:78] waiting up to 5m0s for pod "kube-controller-manager-bridge-20220412195202-42006" in "kube-system" namespace to be "Ready" ...
	I0412 20:07:47.120884  242388 pod_ready.go:92] pod "kube-controller-manager-bridge-20220412195202-42006" in "kube-system" namespace has status "Ready":"True"
	I0412 20:07:47.120904  242388 pod_ready.go:81] duration metric: took 4.495101ms waiting for pod "kube-controller-manager-bridge-20220412195202-42006" in "kube-system" namespace to be "Ready" ...
	I0412 20:07:47.120915  242388 pod_ready.go:78] waiting up to 5m0s for pod "kube-proxy-4ds2h" in "kube-system" namespace to be "Ready" ...
	I0412 20:07:45.343597  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:07:47.343677  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:07:46.117375  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:07:48.616059  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:07:48.377421  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:07:50.876981  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:07:47.498845  242388 pod_ready.go:92] pod "kube-proxy-4ds2h" in "kube-system" namespace has status "Ready":"True"
	I0412 20:07:47.498874  242388 pod_ready.go:81] duration metric: took 377.951883ms waiting for pod "kube-proxy-4ds2h" in "kube-system" namespace to be "Ready" ...
	I0412 20:07:47.498887  242388 pod_ready.go:78] waiting up to 5m0s for pod "kube-scheduler-bridge-20220412195202-42006" in "kube-system" namespace to be "Ready" ...
	I0412 20:07:47.898940  242388 pod_ready.go:92] pod "kube-scheduler-bridge-20220412195202-42006" in "kube-system" namespace has status "Ready":"True"
	I0412 20:07:47.898961  242388 pod_ready.go:81] duration metric: took 400.06795ms waiting for pod "kube-scheduler-bridge-20220412195202-42006" in "kube-system" namespace to be "Ready" ...
	I0412 20:07:47.898970  242388 pod_ready.go:38] duration metric: took 4m11.884749406s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0412 20:07:47.898991  242388 api_server.go:51] waiting for apiserver process to appear ...
	I0412 20:07:47.899009  242388 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0412 20:07:47.899050  242388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0412 20:07:47.926295  242388 cri.go:87] found id: "14b9e14583de0fe8ee16440c2632ec6b373bd957fe60dff98bc7c5ac6e529a66"
	I0412 20:07:47.926324  242388 cri.go:87] found id: ""
	I0412 20:07:47.926330  242388 logs.go:274] 1 containers: [14b9e14583de0fe8ee16440c2632ec6b373bd957fe60dff98bc7c5ac6e529a66]
	I0412 20:07:47.926372  242388 ssh_runner.go:195] Run: which crictl
	I0412 20:07:47.929408  242388 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0412 20:07:47.929469  242388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0412 20:07:47.953916  242388 cri.go:87] found id: "4499cc1763b0967e7077cfe4e08910c5a572b73157cdbf56ab3e1e2b021b0677"
	I0412 20:07:47.953945  242388 cri.go:87] found id: ""
	I0412 20:07:47.953953  242388 logs.go:274] 1 containers: [4499cc1763b0967e7077cfe4e08910c5a572b73157cdbf56ab3e1e2b021b0677]
	I0412 20:07:47.953996  242388 ssh_runner.go:195] Run: which crictl
	I0412 20:07:47.957205  242388 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0412 20:07:47.957265  242388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0412 20:07:47.982927  242388 cri.go:87] found id: "d328c5748827fbdbf41dcc9c6f12ed0e3247a0b507facf1d8c298b78a7e37c18"
	I0412 20:07:47.982954  242388 cri.go:87] found id: ""
	I0412 20:07:47.982971  242388 logs.go:274] 1 containers: [d328c5748827fbdbf41dcc9c6f12ed0e3247a0b507facf1d8c298b78a7e37c18]
	I0412 20:07:47.983015  242388 ssh_runner.go:195] Run: which crictl
	I0412 20:07:47.986670  242388 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0412 20:07:47.986733  242388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0412 20:07:48.013485  242388 cri.go:87] found id: "cb51b1900e8eb1cc74d050ea5c14e8a975455db896ecffcd125e949d187757e6"
	I0412 20:07:48.013510  242388 cri.go:87] found id: ""
	I0412 20:07:48.013517  242388 logs.go:274] 1 containers: [cb51b1900e8eb1cc74d050ea5c14e8a975455db896ecffcd125e949d187757e6]
	I0412 20:07:48.013560  242388 ssh_runner.go:195] Run: which crictl
	I0412 20:07:48.016841  242388 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0412 20:07:48.016907  242388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0412 20:07:48.042036  242388 cri.go:87] found id: "c894381c15be0db1ea25676c013d65df183f61faf81eb360ff971d07631c581b"
	I0412 20:07:48.042064  242388 cri.go:87] found id: ""
	I0412 20:07:48.042071  242388 logs.go:274] 1 containers: [c894381c15be0db1ea25676c013d65df183f61faf81eb360ff971d07631c581b]
	I0412 20:07:48.042114  242388 ssh_runner.go:195] Run: which crictl
	I0412 20:07:48.045287  242388 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0412 20:07:48.045346  242388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0412 20:07:48.072774  242388 cri.go:87] found id: ""
	I0412 20:07:48.072804  242388 logs.go:274] 0 containers: []
	W0412 20:07:48.072811  242388 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0412 20:07:48.072818  242388 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0412 20:07:48.072884  242388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0412 20:07:48.101123  242388 cri.go:87] found id: "7f4eb82ce17bfbde42ae8987aac6d331a0d5ac45a795181983fd6887465383bc"
	I0412 20:07:48.101152  242388 cri.go:87] found id: ""
	I0412 20:07:48.101165  242388 logs.go:274] 1 containers: [7f4eb82ce17bfbde42ae8987aac6d331a0d5ac45a795181983fd6887465383bc]
	I0412 20:07:48.101210  242388 ssh_runner.go:195] Run: which crictl
	I0412 20:07:48.104916  242388 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0412 20:07:48.104978  242388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0412 20:07:48.131749  242388 cri.go:87] found id: "bf8873b5a0e902327042735cc9f938961c600e277e3f0afe20bbf36bb95d9273"
	I0412 20:07:48.131777  242388 cri.go:87] found id: ""
	I0412 20:07:48.131785  242388 logs.go:274] 1 containers: [bf8873b5a0e902327042735cc9f938961c600e277e3f0afe20bbf36bb95d9273]
	I0412 20:07:48.131844  242388 ssh_runner.go:195] Run: which crictl
	I0412 20:07:48.135247  242388 logs.go:123] Gathering logs for describe nodes ...
	I0412 20:07:48.135275  242388 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0412 20:07:48.220592  242388 logs.go:123] Gathering logs for etcd [4499cc1763b0967e7077cfe4e08910c5a572b73157cdbf56ab3e1e2b021b0677] ...
	I0412 20:07:48.220630  242388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4499cc1763b0967e7077cfe4e08910c5a572b73157cdbf56ab3e1e2b021b0677"
	I0412 20:07:48.254716  242388 logs.go:123] Gathering logs for coredns [d328c5748827fbdbf41dcc9c6f12ed0e3247a0b507facf1d8c298b78a7e37c18] ...
	I0412 20:07:48.254755  242388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d328c5748827fbdbf41dcc9c6f12ed0e3247a0b507facf1d8c298b78a7e37c18"
	I0412 20:07:48.283282  242388 logs.go:123] Gathering logs for storage-provisioner [7f4eb82ce17bfbde42ae8987aac6d331a0d5ac45a795181983fd6887465383bc] ...
	I0412 20:07:48.283320  242388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7f4eb82ce17bfbde42ae8987aac6d331a0d5ac45a795181983fd6887465383bc"
	I0412 20:07:48.313931  242388 logs.go:123] Gathering logs for kube-controller-manager [bf8873b5a0e902327042735cc9f938961c600e277e3f0afe20bbf36bb95d9273] ...
	I0412 20:07:48.313970  242388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bf8873b5a0e902327042735cc9f938961c600e277e3f0afe20bbf36bb95d9273"
	I0412 20:07:48.363501  242388 logs.go:123] Gathering logs for container status ...
	I0412 20:07:48.363547  242388 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0412 20:07:48.397214  242388 logs.go:123] Gathering logs for kubelet ...
	I0412 20:07:48.397248  242388 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0412 20:07:48.452454  242388 logs.go:123] Gathering logs for dmesg ...
	I0412 20:07:48.452497  242388 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0412 20:07:48.482460  242388 logs.go:123] Gathering logs for kube-apiserver [14b9e14583de0fe8ee16440c2632ec6b373bd957fe60dff98bc7c5ac6e529a66] ...
	I0412 20:07:48.482499  242388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 14b9e14583de0fe8ee16440c2632ec6b373bd957fe60dff98bc7c5ac6e529a66"
	I0412 20:07:48.515058  242388 logs.go:123] Gathering logs for kube-scheduler [cb51b1900e8eb1cc74d050ea5c14e8a975455db896ecffcd125e949d187757e6] ...
	I0412 20:07:48.515095  242388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb51b1900e8eb1cc74d050ea5c14e8a975455db896ecffcd125e949d187757e6"
	I0412 20:07:48.551729  242388 logs.go:123] Gathering logs for kube-proxy [c894381c15be0db1ea25676c013d65df183f61faf81eb360ff971d07631c581b] ...
	I0412 20:07:48.551766  242388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c894381c15be0db1ea25676c013d65df183f61faf81eb360ff971d07631c581b"
	I0412 20:07:48.578067  242388 logs.go:123] Gathering logs for containerd ...
	I0412 20:07:48.578099  242388 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0412 20:07:51.118753  242388 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:07:51.129713  242388 api_server.go:71] duration metric: took 4m15.249643945s to wait for apiserver process to appear ...
	I0412 20:07:51.129749  242388 api_server.go:87] waiting for apiserver healthz status ...
	I0412 20:07:51.129776  242388 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0412 20:07:51.129847  242388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0412 20:07:51.155410  242388 cri.go:87] found id: "14b9e14583de0fe8ee16440c2632ec6b373bd957fe60dff98bc7c5ac6e529a66"
	I0412 20:07:51.155441  242388 cri.go:87] found id: ""
	I0412 20:07:51.155449  242388 logs.go:274] 1 containers: [14b9e14583de0fe8ee16440c2632ec6b373bd957fe60dff98bc7c5ac6e529a66]
	I0412 20:07:51.155507  242388 ssh_runner.go:195] Run: which crictl
	I0412 20:07:51.158934  242388 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0412 20:07:51.159025  242388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0412 20:07:51.184706  242388 cri.go:87] found id: "4499cc1763b0967e7077cfe4e08910c5a572b73157cdbf56ab3e1e2b021b0677"
	I0412 20:07:51.184740  242388 cri.go:87] found id: ""
	I0412 20:07:51.184749  242388 logs.go:274] 1 containers: [4499cc1763b0967e7077cfe4e08910c5a572b73157cdbf56ab3e1e2b021b0677]
	I0412 20:07:51.184802  242388 ssh_runner.go:195] Run: which crictl
	I0412 20:07:51.188125  242388 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0412 20:07:51.188227  242388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0412 20:07:51.216662  242388 cri.go:87] found id: "d328c5748827fbdbf41dcc9c6f12ed0e3247a0b507facf1d8c298b78a7e37c18"
	I0412 20:07:51.216698  242388 cri.go:87] found id: ""
	I0412 20:07:51.216708  242388 logs.go:274] 1 containers: [d328c5748827fbdbf41dcc9c6f12ed0e3247a0b507facf1d8c298b78a7e37c18]
	I0412 20:07:51.216776  242388 ssh_runner.go:195] Run: which crictl
	I0412 20:07:51.220143  242388 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0412 20:07:51.220208  242388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0412 20:07:51.245862  242388 cri.go:87] found id: "cb51b1900e8eb1cc74d050ea5c14e8a975455db896ecffcd125e949d187757e6"
	I0412 20:07:51.245888  242388 cri.go:87] found id: ""
	I0412 20:07:51.245896  242388 logs.go:274] 1 containers: [cb51b1900e8eb1cc74d050ea5c14e8a975455db896ecffcd125e949d187757e6]
	I0412 20:07:51.245936  242388 ssh_runner.go:195] Run: which crictl
	I0412 20:07:51.249163  242388 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0412 20:07:51.249217  242388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0412 20:07:51.274359  242388 cri.go:87] found id: "c894381c15be0db1ea25676c013d65df183f61faf81eb360ff971d07631c581b"
	I0412 20:07:51.274382  242388 cri.go:87] found id: ""
	I0412 20:07:51.274392  242388 logs.go:274] 1 containers: [c894381c15be0db1ea25676c013d65df183f61faf81eb360ff971d07631c581b]
	I0412 20:07:51.274434  242388 ssh_runner.go:195] Run: which crictl
	I0412 20:07:51.277776  242388 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0412 20:07:51.277848  242388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0412 20:07:51.304608  242388 cri.go:87] found id: ""
	I0412 20:07:51.304639  242388 logs.go:274] 0 containers: []
	W0412 20:07:51.304646  242388 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0412 20:07:51.304654  242388 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0412 20:07:51.304713  242388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0412 20:07:51.331875  242388 cri.go:87] found id: "7f4eb82ce17bfbde42ae8987aac6d331a0d5ac45a795181983fd6887465383bc"
	I0412 20:07:51.331910  242388 cri.go:87] found id: ""
	I0412 20:07:51.331919  242388 logs.go:274] 1 containers: [7f4eb82ce17bfbde42ae8987aac6d331a0d5ac45a795181983fd6887465383bc]
	I0412 20:07:51.331968  242388 ssh_runner.go:195] Run: which crictl
	I0412 20:07:51.335475  242388 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0412 20:07:51.335533  242388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0412 20:07:51.364993  242388 cri.go:87] found id: "bf8873b5a0e902327042735cc9f938961c600e277e3f0afe20bbf36bb95d9273"
	I0412 20:07:51.365031  242388 cri.go:87] found id: ""
	I0412 20:07:51.365039  242388 logs.go:274] 1 containers: [bf8873b5a0e902327042735cc9f938961c600e277e3f0afe20bbf36bb95d9273]
	I0412 20:07:51.365086  242388 ssh_runner.go:195] Run: which crictl
	I0412 20:07:51.368235  242388 logs.go:123] Gathering logs for kube-proxy [c894381c15be0db1ea25676c013d65df183f61faf81eb360ff971d07631c581b] ...
	I0412 20:07:51.368261  242388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c894381c15be0db1ea25676c013d65df183f61faf81eb360ff971d07631c581b"
	I0412 20:07:51.394766  242388 logs.go:123] Gathering logs for containerd ...
	I0412 20:07:51.394798  242388 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0412 20:07:51.434414  242388 logs.go:123] Gathering logs for container status ...
	I0412 20:07:51.434459  242388 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0412 20:07:51.467170  242388 logs.go:123] Gathering logs for kubelet ...
	I0412 20:07:51.467201  242388 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0412 20:07:51.523890  242388 logs.go:123] Gathering logs for describe nodes ...
	I0412 20:07:51.523931  242388 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0412 20:07:51.608825  242388 logs.go:123] Gathering logs for coredns [d328c5748827fbdbf41dcc9c6f12ed0e3247a0b507facf1d8c298b78a7e37c18] ...
	I0412 20:07:51.608866  242388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d328c5748827fbdbf41dcc9c6f12ed0e3247a0b507facf1d8c298b78a7e37c18"
	I0412 20:07:51.638923  242388 logs.go:123] Gathering logs for kube-scheduler [cb51b1900e8eb1cc74d050ea5c14e8a975455db896ecffcd125e949d187757e6] ...
	I0412 20:07:51.638959  242388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb51b1900e8eb1cc74d050ea5c14e8a975455db896ecffcd125e949d187757e6"
	I0412 20:07:51.694709  242388 logs.go:123] Gathering logs for kube-controller-manager [bf8873b5a0e902327042735cc9f938961c600e277e3f0afe20bbf36bb95d9273] ...
	I0412 20:07:51.694754  242388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bf8873b5a0e902327042735cc9f938961c600e277e3f0afe20bbf36bb95d9273"
	I0412 20:07:51.735150  242388 logs.go:123] Gathering logs for dmesg ...
	I0412 20:07:51.735190  242388 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0412 20:07:51.764872  242388 logs.go:123] Gathering logs for kube-apiserver [14b9e14583de0fe8ee16440c2632ec6b373bd957fe60dff98bc7c5ac6e529a66] ...
	I0412 20:07:51.764910  242388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 14b9e14583de0fe8ee16440c2632ec6b373bd957fe60dff98bc7c5ac6e529a66"
	I0412 20:07:51.798060  242388 logs.go:123] Gathering logs for etcd [4499cc1763b0967e7077cfe4e08910c5a572b73157cdbf56ab3e1e2b021b0677] ...
	I0412 20:07:51.798099  242388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4499cc1763b0967e7077cfe4e08910c5a572b73157cdbf56ab3e1e2b021b0677"
	I0412 20:07:51.831196  242388 logs.go:123] Gathering logs for storage-provisioner [7f4eb82ce17bfbde42ae8987aac6d331a0d5ac45a795181983fd6887465383bc] ...
	I0412 20:07:51.831235  242388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7f4eb82ce17bfbde42ae8987aac6d331a0d5ac45a795181983fd6887465383bc"
	I0412 20:07:49.842863  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:07:51.843863  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:07:50.616244  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:07:52.616502  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:07:55.116309  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:07:52.877111  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:07:55.377129  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:07:54.360139  242388 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0412 20:07:54.365043  242388 api_server.go:266] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0412 20:07:54.365929  242388 api_server.go:140] control plane version: v1.23.5
	I0412 20:07:54.365952  242388 api_server.go:130] duration metric: took 3.236196704s to wait for apiserver health ...
	I0412 20:07:54.365961  242388 system_pods.go:43] waiting for kube-system pods to appear ...
	I0412 20:07:54.365980  242388 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0412 20:07:54.366057  242388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0412 20:07:54.392193  242388 cri.go:87] found id: "14b9e14583de0fe8ee16440c2632ec6b373bd957fe60dff98bc7c5ac6e529a66"
	I0412 20:07:54.392229  242388 cri.go:87] found id: ""
	I0412 20:07:54.392238  242388 logs.go:274] 1 containers: [14b9e14583de0fe8ee16440c2632ec6b373bd957fe60dff98bc7c5ac6e529a66]
	I0412 20:07:54.392288  242388 ssh_runner.go:195] Run: which crictl
	I0412 20:07:54.395591  242388 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0412 20:07:54.395644  242388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0412 20:07:54.423066  242388 cri.go:87] found id: "4499cc1763b0967e7077cfe4e08910c5a572b73157cdbf56ab3e1e2b021b0677"
	I0412 20:07:54.423101  242388 cri.go:87] found id: ""
	I0412 20:07:54.423109  242388 logs.go:274] 1 containers: [4499cc1763b0967e7077cfe4e08910c5a572b73157cdbf56ab3e1e2b021b0677]
	I0412 20:07:54.423152  242388 ssh_runner.go:195] Run: which crictl
	I0412 20:07:54.426420  242388 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0412 20:07:54.426489  242388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0412 20:07:54.451827  242388 cri.go:87] found id: "d328c5748827fbdbf41dcc9c6f12ed0e3247a0b507facf1d8c298b78a7e37c18"
	I0412 20:07:54.451857  242388 cri.go:87] found id: ""
	I0412 20:07:54.451865  242388 logs.go:274] 1 containers: [d328c5748827fbdbf41dcc9c6f12ed0e3247a0b507facf1d8c298b78a7e37c18]
	I0412 20:07:54.451921  242388 ssh_runner.go:195] Run: which crictl
	I0412 20:07:54.455198  242388 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0412 20:07:54.455267  242388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0412 20:07:54.482696  242388 cri.go:87] found id: "cb51b1900e8eb1cc74d050ea5c14e8a975455db896ecffcd125e949d187757e6"
	I0412 20:07:54.482727  242388 cri.go:87] found id: ""
	I0412 20:07:54.482738  242388 logs.go:274] 1 containers: [cb51b1900e8eb1cc74d050ea5c14e8a975455db896ecffcd125e949d187757e6]
	I0412 20:07:54.482799  242388 ssh_runner.go:195] Run: which crictl
	I0412 20:07:54.486206  242388 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0412 20:07:54.486281  242388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0412 20:07:54.513262  242388 cri.go:87] found id: "c894381c15be0db1ea25676c013d65df183f61faf81eb360ff971d07631c581b"
	I0412 20:07:54.513289  242388 cri.go:87] found id: ""
	I0412 20:07:54.513296  242388 logs.go:274] 1 containers: [c894381c15be0db1ea25676c013d65df183f61faf81eb360ff971d07631c581b]
	I0412 20:07:54.513336  242388 ssh_runner.go:195] Run: which crictl
	I0412 20:07:54.516728  242388 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0412 20:07:54.516810  242388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0412 20:07:54.541344  242388 cri.go:87] found id: ""
	I0412 20:07:54.541369  242388 logs.go:274] 0 containers: []
	W0412 20:07:54.541376  242388 logs.go:276] No container was found matching "kubernetes-dashboard"
	I0412 20:07:54.541383  242388 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0412 20:07:54.541444  242388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0412 20:07:54.567592  242388 cri.go:87] found id: "7f4eb82ce17bfbde42ae8987aac6d331a0d5ac45a795181983fd6887465383bc"
	I0412 20:07:54.567616  242388 cri.go:87] found id: ""
	I0412 20:07:54.567622  242388 logs.go:274] 1 containers: [7f4eb82ce17bfbde42ae8987aac6d331a0d5ac45a795181983fd6887465383bc]
	I0412 20:07:54.567676  242388 ssh_runner.go:195] Run: which crictl
	I0412 20:07:54.570863  242388 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0412 20:07:54.570934  242388 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0412 20:07:54.597122  242388 cri.go:87] found id: "bf8873b5a0e902327042735cc9f938961c600e277e3f0afe20bbf36bb95d9273"
	I0412 20:07:54.597152  242388 cri.go:87] found id: ""
	I0412 20:07:54.597163  242388 logs.go:274] 1 containers: [bf8873b5a0e902327042735cc9f938961c600e277e3f0afe20bbf36bb95d9273]
	I0412 20:07:54.597214  242388 ssh_runner.go:195] Run: which crictl
	I0412 20:07:54.600606  242388 logs.go:123] Gathering logs for container status ...
	I0412 20:07:54.600635  242388 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0412 20:07:54.630713  242388 logs.go:123] Gathering logs for kube-apiserver [14b9e14583de0fe8ee16440c2632ec6b373bd957fe60dff98bc7c5ac6e529a66] ...
	I0412 20:07:54.630756  242388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 14b9e14583de0fe8ee16440c2632ec6b373bd957fe60dff98bc7c5ac6e529a66"
	I0412 20:07:54.661857  242388 logs.go:123] Gathering logs for kube-scheduler [cb51b1900e8eb1cc74d050ea5c14e8a975455db896ecffcd125e949d187757e6] ...
	I0412 20:07:54.661892  242388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb51b1900e8eb1cc74d050ea5c14e8a975455db896ecffcd125e949d187757e6"
	I0412 20:07:54.696955  242388 logs.go:123] Gathering logs for kube-proxy [c894381c15be0db1ea25676c013d65df183f61faf81eb360ff971d07631c581b] ...
	I0412 20:07:54.697002  242388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c894381c15be0db1ea25676c013d65df183f61faf81eb360ff971d07631c581b"
	I0412 20:07:54.725596  242388 logs.go:123] Gathering logs for storage-provisioner [7f4eb82ce17bfbde42ae8987aac6d331a0d5ac45a795181983fd6887465383bc] ...
	I0412 20:07:54.725626  242388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7f4eb82ce17bfbde42ae8987aac6d331a0d5ac45a795181983fd6887465383bc"
	I0412 20:07:54.751160  242388 logs.go:123] Gathering logs for kube-controller-manager [bf8873b5a0e902327042735cc9f938961c600e277e3f0afe20bbf36bb95d9273] ...
	I0412 20:07:54.751192  242388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bf8873b5a0e902327042735cc9f938961c600e277e3f0afe20bbf36bb95d9273"
	I0412 20:07:54.788085  242388 logs.go:123] Gathering logs for containerd ...
	I0412 20:07:54.788125  242388 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0412 20:07:54.828590  242388 logs.go:123] Gathering logs for kubelet ...
	I0412 20:07:54.828633  242388 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0412 20:07:54.883562  242388 logs.go:123] Gathering logs for dmesg ...
	I0412 20:07:54.883616  242388 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0412 20:07:54.914264  242388 logs.go:123] Gathering logs for describe nodes ...
	I0412 20:07:54.914318  242388 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0412 20:07:54.995678  242388 logs.go:123] Gathering logs for etcd [4499cc1763b0967e7077cfe4e08910c5a572b73157cdbf56ab3e1e2b021b0677] ...
	I0412 20:07:54.995716  242388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4499cc1763b0967e7077cfe4e08910c5a572b73157cdbf56ab3e1e2b021b0677"
	I0412 20:07:55.028252  242388 logs.go:123] Gathering logs for coredns [d328c5748827fbdbf41dcc9c6f12ed0e3247a0b507facf1d8c298b78a7e37c18] ...
	I0412 20:07:55.028285  242388 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d328c5748827fbdbf41dcc9c6f12ed0e3247a0b507facf1d8c298b78a7e37c18"
	I0412 20:07:57.562998  242388 system_pods.go:59] 7 kube-system pods found
	I0412 20:07:57.563046  242388 system_pods.go:61] "coredns-64897985d-n8275" [6288c440-7286-4371-887b-05bdd2c3ae03] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0412 20:07:57.563056  242388 system_pods.go:61] "etcd-bridge-20220412195202-42006" [5b8eb204-1b53-40ed-99a0-9ad66b992a11] Running
	I0412 20:07:57.563061  242388 system_pods.go:61] "kube-apiserver-bridge-20220412195202-42006" [b3d2ad41-c353-4f4d-adec-8eb4e415a3a9] Running
	I0412 20:07:57.563068  242388 system_pods.go:61] "kube-controller-manager-bridge-20220412195202-42006" [60642473-00d1-4412-9acc-f3fca32da8d1] Running
	I0412 20:07:57.563074  242388 system_pods.go:61] "kube-proxy-4ds2h" [b20999c9-8e7e-4489-b3d7-d07d890ff182] Running
	I0412 20:07:57.563082  242388 system_pods.go:61] "kube-scheduler-bridge-20220412195202-42006" [0b786d2f-ce5c-481f-af32-fb5574748ff4] Running
	I0412 20:07:57.563089  242388 system_pods.go:61] "storage-provisioner" [0d99066c-431e-4568-adf0-f4d550abb732] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0412 20:07:57.563103  242388 system_pods.go:74] duration metric: took 3.197136349s to wait for pod list to return data ...
	I0412 20:07:57.563119  242388 default_sa.go:34] waiting for default service account to be created ...
	I0412 20:07:57.565734  242388 default_sa.go:45] found service account: "default"
	I0412 20:07:57.565758  242388 default_sa.go:55] duration metric: took 2.633322ms for default service account to be created ...
	I0412 20:07:57.565767  242388 system_pods.go:116] waiting for k8s-apps to be running ...
	I0412 20:07:57.570422  242388 system_pods.go:86] 7 kube-system pods found
	I0412 20:07:57.570457  242388 system_pods.go:89] "coredns-64897985d-n8275" [6288c440-7286-4371-887b-05bdd2c3ae03] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0412 20:07:57.570464  242388 system_pods.go:89] "etcd-bridge-20220412195202-42006" [5b8eb204-1b53-40ed-99a0-9ad66b992a11] Running
	I0412 20:07:57.570469  242388 system_pods.go:89] "kube-apiserver-bridge-20220412195202-42006" [b3d2ad41-c353-4f4d-adec-8eb4e415a3a9] Running
	I0412 20:07:57.570474  242388 system_pods.go:89] "kube-controller-manager-bridge-20220412195202-42006" [60642473-00d1-4412-9acc-f3fca32da8d1] Running
	I0412 20:07:57.570478  242388 system_pods.go:89] "kube-proxy-4ds2h" [b20999c9-8e7e-4489-b3d7-d07d890ff182] Running
	I0412 20:07:57.570483  242388 system_pods.go:89] "kube-scheduler-bridge-20220412195202-42006" [0b786d2f-ce5c-481f-af32-fb5574748ff4] Running
	I0412 20:07:57.570488  242388 system_pods.go:89] "storage-provisioner" [0d99066c-431e-4568-adf0-f4d550abb732] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0412 20:07:57.570494  242388 system_pods.go:126] duration metric: took 4.722384ms to wait for k8s-apps to be running ...
	I0412 20:07:57.570505  242388 system_svc.go:44] waiting for kubelet service to be running ....
	I0412 20:07:57.570548  242388 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0412 20:07:57.581395  242388 system_svc.go:56] duration metric: took 10.877477ms WaitForService to wait for kubelet.
	I0412 20:07:57.581432  242388 kubeadm.go:548] duration metric: took 4m21.701368513s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0412 20:07:57.581476  242388 node_conditions.go:102] verifying NodePressure condition ...
	I0412 20:07:57.584483  242388 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0412 20:07:57.584522  242388 node_conditions.go:123] node cpu capacity is 8
	I0412 20:07:57.584534  242388 node_conditions.go:105] duration metric: took 3.052647ms to run NodePressure ...
	I0412 20:07:57.584546  242388 start.go:213] waiting for startup goroutines ...
	I0412 20:07:57.623316  242388 start.go:499] kubectl: 1.23.5, cluster: 1.23.5 (minor skew: 0)
	I0412 20:07:57.625886  242388 out.go:176] * Done! kubectl is now configured to use "bridge-20220412195202-42006" cluster and "default" namespace by default
	I0412 20:07:53.843924  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:07:56.343760  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:07:57.616305  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:07:59.616970  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:07:57.876917  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:07:59.876969  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:07:58.843532  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:08:01.343266  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:08:02.115904  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:08:04.116462  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:08:02.377072  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:08:04.377447  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:08:06.377927  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:08:03.343687  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:08:05.344519  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:08:07.844120  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:08:06.616342  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:08:09.116011  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:08:08.876715  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:08:10.876992  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:08:10.343283  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:08:12.344262  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:08:11.116286  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:08:13.116651  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:08:12.877639  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:08:15.377865  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:08:14.844233  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:08:16.844349  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:08:15.616009  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:08:17.616863  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:08:20.116620  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:08:17.877000  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:08:19.877332  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:08:21.877545  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:08:19.344311  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:08:21.843388  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:08:22.116816  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:08:24.616268  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:08:24.377891  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:08:26.876684  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:08:23.844010  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:08:26.343650  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:08:27.116520  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:08:29.615844  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:08:28.877006  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:08:30.877641  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:08:28.343803  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:08:30.843896  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:08:31.617165  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:08:34.116149  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:08:33.377445  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:08:35.876596  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:08:33.342972  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:08:35.343470  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:08:37.345152  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:08:36.116632  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:08:38.616288  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:08:37.877405  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:08:40.377447  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:08:39.844005  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:08:42.344565  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:08:40.617049  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:08:42.617248  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:08:45.116711  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:08:42.876734  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:08:44.877017  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:08:44.843371  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:08:47.343783  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:08:47.616263  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:08:49.616386  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:08:47.376581  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:08:49.377052  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:08:51.377414  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:08:49.343917  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:08:51.344008  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:08:52.117238  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:08:54.616379  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:08:53.877648  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:08:56.376551  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:08:53.843110  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:08:55.844092  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:08:57.116572  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:08:59.616687  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:08:58.376693  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:09:00.377390  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:08:58.343120  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:09:00.843928  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:09:02.116215  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:09:04.616429  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:09:02.876643  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:09:04.877491  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:09:03.343475  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:09:05.344253  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:09:07.843997  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:09:06.616538  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:09:08.616760  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:09:07.376877  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:09:09.377403  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:09:11.876753  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:09:10.343170  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:09:12.844102  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:09:10.616938  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:09:13.116240  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:09:13.877655  248748 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:09:15.379867  248748 node_ready.go:38] duration metric: took 4m0.009449893s waiting for node "old-k8s-version-20220412200421-42006" to be "Ready" ...
	I0412 20:09:15.382455  248748 out.go:176] 
	W0412 20:09:15.382637  248748 out.go:241] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0412 20:09:15.382653  248748 out.go:241] * 
	W0412 20:09:15.383376  248748 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0412 20:09:15.343875  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:09:17.344345  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:09:15.616955  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:09:17.617162  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:09:20.116251  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:09:19.843578  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:09:21.843732  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:09:22.116450  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:09:24.616869  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:09:23.843902  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:09:26.343860  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:09:27.116619  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:09:29.616848  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:09:28.843031  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:09:31.343702  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:09:32.116785  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:09:34.617051  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:09:33.343736  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:09:35.344568  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:09:37.843550  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:09:37.116661  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:09:39.116703  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:09:40.343118  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:09:42.343850  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:09:41.116866  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:09:43.616290  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:09:44.844303  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:09:47.343638  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:09:45.617195  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:09:48.116600  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:09:49.842662  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:09:51.842754  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:09:50.617008  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:09:53.116012  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:09:55.116667  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:09:53.843510  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:09:56.344468  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:09:57.616022  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:09:59.617067  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:09:58.842943  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:10:00.843413  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:10:02.843570  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:10:02.116700  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:10:04.616203  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:10:07.116257  255510 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:10:07.618422  255510 node_ready.go:38] duration metric: took 4m0.009531174s waiting for node "embed-certs-20220412200510-42006" to be "Ready" ...
	I0412 20:10:07.620809  255510 out.go:176] 
	W0412 20:10:07.620921  255510 out.go:241] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0412 20:10:07.620935  255510 out.go:241] * 
	W0412 20:10:07.621615  255510 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0412 20:10:04.843662  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	I0412 20:10:07.343570  262043 pod_ready.go:102] pod "metrics-server-b955d9d8-2chfs" in "kube-system" namespace has status "Ready":"False"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	a3ab3b09e47d2       6de166512aa22       About a minute ago   Running             kindnet-cni               1                   9316c5fd3c63b
	9477001e7ee3b       6de166512aa22       4 minutes ago        Exited              kindnet-cni               0                   9316c5fd3c63b
	99c30d34ba676       3c53fa8541f95       4 minutes ago        Running             kube-proxy                0                   3cb029bb303fd
	1549b6cbd198c       b0c9e5e4dbb14       4 minutes ago        Running             kube-controller-manager   0                   9d0f79bb073ce
	3ecbbe2de190c       3fc1d62d65872       4 minutes ago        Running             kube-apiserver            0                   b911569574c06
	3bb4ed6826e04       25f8c7f3da61c       4 minutes ago        Running             etcd                      0                   c8ba1e6aa297c
	e67989f440e43       884d49d6d8c9f       4 minutes ago        Running             kube-scheduler            0                   cae06935f0abb
	
	* 
	* ==> containerd <==
	* -- Logs begin at Tue 2022-04-12 20:05:24 UTC, end at Tue 2022-04-12 20:10:08 UTC. --
	Apr 12 20:05:47 embed-certs-20220412200510-42006 containerd[470]: time="2022-04-12T20:05:47.985544440Z" level=info msg="StartContainer for \"3bb4ed6826e041fff709fbb31d1f2446a15f08bcc0fa07eb151243acd0226bed\" returns successfully"
	Apr 12 20:05:47 embed-certs-20220412200510-42006 containerd[470]: time="2022-04-12T20:05:47.992891518Z" level=info msg="StartContainer for \"1549b6cbd198c45abd7224f0fbd5ce0d6713b1d4c5ccbad32a34ac2b6a109d2d\" returns successfully"
	Apr 12 20:06:06 embed-certs-20220412200510-42006 containerd[470]: time="2022-04-12T20:06:06.022613606Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
	Apr 12 20:06:07 embed-certs-20220412200510-42006 containerd[470]: time="2022-04-12T20:06:07.155437942Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kube-proxy-6nznr,Uid:aa45eb74-fde3-453a-82ad-e29ae4116d51,Namespace:kube-system,Attempt:0,}"
	Apr 12 20:06:07 embed-certs-20220412200510-42006 containerd[470]: time="2022-04-12T20:06:07.155438052Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kindnet-7f7sj,Uid:059bb69b-b8de-4f71-85b1-8d7391491598,Namespace:kube-system,Attempt:0,}"
	Apr 12 20:06:07 embed-certs-20220412200510-42006 containerd[470]: time="2022-04-12T20:06:07.179565593Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9316c5fd3c63b7b246c2411406f65a7f4118e64aad905b71ac46068b5e7e0b84 pid=1694
	Apr 12 20:06:07 embed-certs-20220412200510-42006 containerd[470]: time="2022-04-12T20:06:07.179784838Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3cb029bb303fd3b8c35ae5e29826f1ee17f4f5fbc34b221da23f2188cf5f11df pid=1695
	Apr 12 20:06:07 embed-certs-20220412200510-42006 containerd[470]: time="2022-04-12T20:06:07.244258008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6nznr,Uid:aa45eb74-fde3-453a-82ad-e29ae4116d51,Namespace:kube-system,Attempt:0,} returns sandbox id \"3cb029bb303fd3b8c35ae5e29826f1ee17f4f5fbc34b221da23f2188cf5f11df\""
	Apr 12 20:06:07 embed-certs-20220412200510-42006 containerd[470]: time="2022-04-12T20:06:07.246791687Z" level=info msg="CreateContainer within sandbox \"3cb029bb303fd3b8c35ae5e29826f1ee17f4f5fbc34b221da23f2188cf5f11df\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
	Apr 12 20:06:07 embed-certs-20220412200510-42006 containerd[470]: time="2022-04-12T20:06:07.262324363Z" level=info msg="CreateContainer within sandbox \"3cb029bb303fd3b8c35ae5e29826f1ee17f4f5fbc34b221da23f2188cf5f11df\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"99c30d34ba6769dbe90b18eefcf0db92072e5d977b32371ee959bba91b958dc9\""
	Apr 12 20:06:07 embed-certs-20220412200510-42006 containerd[470]: time="2022-04-12T20:06:07.262985191Z" level=info msg="StartContainer for \"99c30d34ba6769dbe90b18eefcf0db92072e5d977b32371ee959bba91b958dc9\""
	Apr 12 20:06:07 embed-certs-20220412200510-42006 containerd[470]: time="2022-04-12T20:06:07.343365123Z" level=info msg="StartContainer for \"99c30d34ba6769dbe90b18eefcf0db92072e5d977b32371ee959bba91b958dc9\" returns successfully"
	Apr 12 20:06:07 embed-certs-20220412200510-42006 containerd[470]: time="2022-04-12T20:06:07.488787451Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kindnet-7f7sj,Uid:059bb69b-b8de-4f71-85b1-8d7391491598,Namespace:kube-system,Attempt:0,} returns sandbox id \"9316c5fd3c63b7b246c2411406f65a7f4118e64aad905b71ac46068b5e7e0b84\""
	Apr 12 20:06:07 embed-certs-20220412200510-42006 containerd[470]: time="2022-04-12T20:06:07.492791644Z" level=info msg="CreateContainer within sandbox \"9316c5fd3c63b7b246c2411406f65a7f4118e64aad905b71ac46068b5e7e0b84\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:0,}"
	Apr 12 20:06:07 embed-certs-20220412200510-42006 containerd[470]: time="2022-04-12T20:06:07.520460426Z" level=info msg="CreateContainer within sandbox \"9316c5fd3c63b7b246c2411406f65a7f4118e64aad905b71ac46068b5e7e0b84\" for &ContainerMetadata{Name:kindnet-cni,Attempt:0,} returns container id \"9477001e7ee3b30e9f16b66bf87b6b49322c15b624a1e90575725fc4655cc0ba\""
	Apr 12 20:06:07 embed-certs-20220412200510-42006 containerd[470]: time="2022-04-12T20:06:07.521147412Z" level=info msg="StartContainer for \"9477001e7ee3b30e9f16b66bf87b6b49322c15b624a1e90575725fc4655cc0ba\""
	Apr 12 20:06:07 embed-certs-20220412200510-42006 containerd[470]: time="2022-04-12T20:06:07.807206879Z" level=info msg="StartContainer for \"9477001e7ee3b30e9f16b66bf87b6b49322c15b624a1e90575725fc4655cc0ba\" returns successfully"
	Apr 12 20:08:48 embed-certs-20220412200510-42006 containerd[470]: time="2022-04-12T20:08:48.122673242Z" level=info msg="shim disconnected" id=9477001e7ee3b30e9f16b66bf87b6b49322c15b624a1e90575725fc4655cc0ba
	Apr 12 20:08:48 embed-certs-20220412200510-42006 containerd[470]: time="2022-04-12T20:08:48.122743841Z" level=warning msg="cleaning up after shim disconnected" id=9477001e7ee3b30e9f16b66bf87b6b49322c15b624a1e90575725fc4655cc0ba namespace=k8s.io
	Apr 12 20:08:48 embed-certs-20220412200510-42006 containerd[470]: time="2022-04-12T20:08:48.122757768Z" level=info msg="cleaning up dead shim"
	Apr 12 20:08:48 embed-certs-20220412200510-42006 containerd[470]: time="2022-04-12T20:08:48.134261281Z" level=warning msg="cleanup warnings time=\"2022-04-12T20:08:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2063\n"
	Apr 12 20:08:48 embed-certs-20220412200510-42006 containerd[470]: time="2022-04-12T20:08:48.786632712Z" level=info msg="CreateContainer within sandbox \"9316c5fd3c63b7b246c2411406f65a7f4118e64aad905b71ac46068b5e7e0b84\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:1,}"
	Apr 12 20:08:48 embed-certs-20220412200510-42006 containerd[470]: time="2022-04-12T20:08:48.803163398Z" level=info msg="CreateContainer within sandbox \"9316c5fd3c63b7b246c2411406f65a7f4118e64aad905b71ac46068b5e7e0b84\" for &ContainerMetadata{Name:kindnet-cni,Attempt:1,} returns container id \"a3ab3b09e47d2204acbc8f870d4b903121d2535cbfc5b44e243f42dcffea2f9c\""
	Apr 12 20:08:48 embed-certs-20220412200510-42006 containerd[470]: time="2022-04-12T20:08:48.803735642Z" level=info msg="StartContainer for \"a3ab3b09e47d2204acbc8f870d4b903121d2535cbfc5b44e243f42dcffea2f9c\""
	Apr 12 20:08:48 embed-certs-20220412200510-42006 containerd[470]: time="2022-04-12T20:08:48.984300676Z" level=info msg="StartContainer for \"a3ab3b09e47d2204acbc8f870d4b903121d2535cbfc5b44e243f42dcffea2f9c\" returns successfully"
	
	* 
	* ==> describe nodes <==
	* Name:               embed-certs-20220412200510-42006
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-20220412200510-42006
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=dcd548d63d1c0dcbdc0ffc0bd37d4379117c142f
	                    minikube.k8s.io/name=embed-certs-20220412200510-42006
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_04_12T20_05_55_0700
	                    minikube.k8s.io/version=v1.25.2
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 12 Apr 2022 20:05:50 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-20220412200510-42006
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 12 Apr 2022 20:10:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 12 Apr 2022 20:06:06 +0000   Tue, 12 Apr 2022 20:05:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 12 Apr 2022 20:06:06 +0000   Tue, 12 Apr 2022 20:05:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 12 Apr 2022 20:06:06 +0000   Tue, 12 Apr 2022 20:05:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Tue, 12 Apr 2022 20:06:06 +0000   Tue, 12 Apr 2022 20:05:48 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    embed-certs-20220412200510-42006
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873828Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873828Ki
	  pods:               110
	System Info:
	  Machine ID:                 140a143b31184b58be947b52a01fff83
	  System UUID:                ce1f241f-9ecd-4653-8279-4a97e0fb4c59
	  Boot ID:                    16b2caa1-c1b9-4ccc-85b8-d4dc3f51a5e1
	  Kernel Version:             5.13.0-1023-gcp
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.5.10
	  Kubelet Version:            v1.23.5
	  Kube-Proxy Version:         v1.23.5
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                        ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-embed-certs-20220412200510-42006                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m9s
	  kube-system                 kindnet-7f7sj                                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m2s
	  kube-system                 kube-apiserver-embed-certs-20220412200510-42006             250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 kube-controller-manager-embed-certs-20220412200510-42006    200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 kube-proxy-6nznr                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 kube-scheduler-embed-certs-20220412200510-42006             100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m15s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 4m1s                   kube-proxy  
	  Normal  Starting                 4m22s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m21s (x5 over 4m21s)  kubelet     Node embed-certs-20220412200510-42006 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m21s (x4 over 4m21s)  kubelet     Node embed-certs-20220412200510-42006 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m21s (x4 over 4m21s)  kubelet     Node embed-certs-20220412200510-42006 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m21s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 4m9s                   kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m9s                   kubelet     Node embed-certs-20220412200510-42006 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m9s                   kubelet     Node embed-certs-20220412200510-42006 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m9s                   kubelet     Node embed-certs-20220412200510-42006 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m9s                   kubelet     Updated Node Allocatable limit across pods
	
	* 
	* ==> dmesg <==
	* [  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +2.959906] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +1.007868] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +1.023887] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +2.967869] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +1.035859] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +1.019937] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[ +10.851791] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[Apr12 20:10] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +1.023965] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +2.951861] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +1.015883] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +1.019932] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	
	* 
	* ==> etcd [3bb4ed6826e041fff709fbb31d1f2446a15f08bcc0fa07eb151243acd0226bed] <==
	* {"level":"info","ts":"2022-04-12T20:05:48.080Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 switched to configuration voters=(12882097698489969905)"}
	{"level":"info","ts":"2022-04-12T20:05:48.080Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2022-04-12T20:05:48.083Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-04-12T20:05:48.083Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2022-04-12T20:05:48.083Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2022-04-12T20:05:48.083Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-04-12T20:05:48.083Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-04-12T20:05:48.617Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2022-04-12T20:05:48.617Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2022-04-12T20:05:48.617Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2022-04-12T20:05:48.617Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2022-04-12T20:05:48.617Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2022-04-12T20:05:48.617Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2022-04-12T20:05:48.617Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2022-04-12T20:05:48.617Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-04-12T20:05:48.619Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2022-04-12T20:05:48.619Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-04-12T20:05:48.619Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-04-12T20:05:48.619Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:embed-certs-20220412200510-42006 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2022-04-12T20:05:48.619Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-04-12T20:05:48.619Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-04-12T20:05:48.619Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-04-12T20:05:48.619Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-04-12T20:05:48.620Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-04-12T20:05:48.620Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
	
	* 
	* ==> kernel <==
	*  20:10:08 up  2:52,  0 users,  load average: 0.87, 1.40, 1.79
	Linux embed-certs-20220412200510-42006 5.13.0-1023-gcp #28~20.04.1-Ubuntu SMP Wed Mar 30 03:51:07 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [3ecbbe2de190c9c1e2f575bb88b355a7eaf09932cb16fd1a6cef069051de9930] <==
	* I0412 20:05:51.079090       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0412 20:05:51.079168       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I0412 20:05:51.079317       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0412 20:05:51.079334       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0412 20:05:51.081403       1 controller.go:611] quota admission added evaluator for: namespaces
	I0412 20:05:51.951431       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0412 20:05:51.956780       1 storage_scheduling.go:93] created PriorityClass system-node-critical with value 2000001000
	I0412 20:05:51.958625       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0412 20:05:51.960721       1 storage_scheduling.go:93] created PriorityClass system-cluster-critical with value 2000000000
	I0412 20:05:51.960740       1 storage_scheduling.go:109] all system priority classes are created successfully or already exist.
	I0412 20:05:52.453396       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0412 20:05:52.492042       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0412 20:05:52.622773       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0412 20:05:52.627636       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I0412 20:05:52.628832       1 controller.go:611] quota admission added evaluator for: endpoints
	I0412 20:05:52.632992       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0412 20:05:52.692975       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0412 20:05:53.108187       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0412 20:05:54.258431       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0412 20:05:54.266902       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0412 20:05:54.281209       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0412 20:06:06.703041       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0412 20:06:06.802578       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0412 20:06:07.429868       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	* 
	* ==> kube-controller-manager [1549b6cbd198c45abd7224f0fbd5ce0d6713b1d4c5ccbad32a34ac2b6a109d2d] <==
	* I0412 20:06:05.965796       1 range_allocator.go:374] Set node embed-certs-20220412200510-42006 PodCIDR to [10.244.0.0/24]
	I0412 20:06:05.965962       1 shared_informer.go:247] Caches are synced for PVC protection 
	I0412 20:06:05.988586       1 shared_informer.go:247] Caches are synced for taint 
	I0412 20:06:05.988690       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0412 20:06:05.988706       1 node_lifecycle_controller.go:1397] Initializing eviction metric for zone: 
	W0412 20:06:05.988857       1 node_lifecycle_controller.go:1012] Missing timestamp for Node embed-certs-20220412200510-42006. Assuming now as a timestamp.
	I0412 20:06:05.988920       1 node_lifecycle_controller.go:1163] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I0412 20:06:05.988871       1 event.go:294] "Event occurred" object="embed-certs-20220412200510-42006" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node embed-certs-20220412200510-42006 event: Registered Node embed-certs-20220412200510-42006 in Controller"
	I0412 20:06:06.049681       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	I0412 20:06:06.072407       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
	I0412 20:06:06.100997       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0412 20:06:06.117589       1 shared_informer.go:247] Caches are synced for disruption 
	I0412 20:06:06.117622       1 disruption.go:371] Sending events to api server.
	I0412 20:06:06.155080       1 shared_informer.go:247] Caches are synced for resource quota 
	I0412 20:06:06.158368       1 shared_informer.go:247] Caches are synced for resource quota 
	I0412 20:06:06.555369       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0412 20:06:06.555404       1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0412 20:06:06.586454       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0412 20:06:06.705486       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-64897985d to 2"
	I0412 20:06:06.809151       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-6nznr"
	I0412 20:06:06.809239       1 event.go:294] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-7f7sj"
	I0412 20:06:06.951974       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-64897985d to 1"
	I0412 20:06:06.955212       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-gnw47"
	I0412 20:06:06.962832       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-zvglg"
	I0412 20:06:06.997626       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-64897985d-gnw47"
	
	* 
	* ==> kube-proxy [99c30d34ba6769dbe90b18eefcf0db92072e5d977b32371ee959bba91b958dc9] <==
	* I0412 20:06:07.392554       1 node.go:163] Successfully retrieved node IP: 192.168.58.2
	I0412 20:06:07.392628       1 server_others.go:138] "Detected node IP" address="192.168.58.2"
	I0412 20:06:07.392660       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0412 20:06:07.419205       1 server_others.go:206] "Using iptables Proxier"
	I0412 20:06:07.419245       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0412 20:06:07.419257       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0412 20:06:07.419297       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0412 20:06:07.419807       1 server.go:656] "Version info" version="v1.23.5"
	I0412 20:06:07.422063       1 config.go:226] "Starting endpoint slice config controller"
	I0412 20:06:07.422089       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0412 20:06:07.422928       1 config.go:317] "Starting service config controller"
	I0412 20:06:07.422945       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0412 20:06:07.524186       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0412 20:06:07.524314       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [e67989f440e4332c6ff00c54e8fa657032c034f05a0edc75576cb16ffd4794b0] <==
	* E0412 20:05:51.099919       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0412 20:05:51.099933       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0412 20:05:51.099991       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0412 20:05:51.099995       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0412 20:05:51.100017       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0412 20:05:51.100045       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0412 20:05:51.928224       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0412 20:05:51.928267       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0412 20:05:51.928229       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0412 20:05:51.928294       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0412 20:05:51.981180       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0412 20:05:51.981262       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0412 20:05:51.982338       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0412 20:05:51.982383       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0412 20:05:52.070012       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0412 20:05:52.070085       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0412 20:05:52.082539       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0412 20:05:52.082581       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0412 20:05:52.109222       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0412 20:05:52.109254       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0412 20:05:52.121424       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0412 20:05:52.121458       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0412 20:05:52.211687       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0412 20:05:52.211733       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0412 20:05:54.188758       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2022-04-12 20:05:24 UTC, end at Tue 2022-04-12 20:10:09 UTC. --
	Apr 12 20:08:09 embed-certs-20220412200510-42006 kubelet[1305]: E0412 20:08:09.649742    1305 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:08:14 embed-certs-20220412200510-42006 kubelet[1305]: E0412 20:08:14.650478    1305 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:08:19 embed-certs-20220412200510-42006 kubelet[1305]: E0412 20:08:19.651842    1305 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:08:24 embed-certs-20220412200510-42006 kubelet[1305]: E0412 20:08:24.652755    1305 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:08:29 embed-certs-20220412200510-42006 kubelet[1305]: E0412 20:08:29.654422    1305 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:08:34 embed-certs-20220412200510-42006 kubelet[1305]: E0412 20:08:34.657909    1305 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:08:39 embed-certs-20220412200510-42006 kubelet[1305]: E0412 20:08:39.659074    1305 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:08:44 embed-certs-20220412200510-42006 kubelet[1305]: E0412 20:08:44.660392    1305 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:08:48 embed-certs-20220412200510-42006 kubelet[1305]: I0412 20:08:48.784524    1305 scope.go:110] "RemoveContainer" containerID="9477001e7ee3b30e9f16b66bf87b6b49322c15b624a1e90575725fc4655cc0ba"
	Apr 12 20:08:49 embed-certs-20220412200510-42006 kubelet[1305]: E0412 20:08:49.662033    1305 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:08:54 embed-certs-20220412200510-42006 kubelet[1305]: E0412 20:08:54.663310    1305 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:08:59 embed-certs-20220412200510-42006 kubelet[1305]: E0412 20:08:59.664417    1305 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:09:04 embed-certs-20220412200510-42006 kubelet[1305]: E0412 20:09:04.665514    1305 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:09:09 embed-certs-20220412200510-42006 kubelet[1305]: E0412 20:09:09.666883    1305 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:09:14 embed-certs-20220412200510-42006 kubelet[1305]: E0412 20:09:14.667940    1305 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:09:19 embed-certs-20220412200510-42006 kubelet[1305]: E0412 20:09:19.668925    1305 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:09:24 embed-certs-20220412200510-42006 kubelet[1305]: E0412 20:09:24.670375    1305 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:09:29 embed-certs-20220412200510-42006 kubelet[1305]: E0412 20:09:29.671851    1305 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:09:34 embed-certs-20220412200510-42006 kubelet[1305]: E0412 20:09:34.672780    1305 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:09:39 embed-certs-20220412200510-42006 kubelet[1305]: E0412 20:09:39.674025    1305 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:09:44 embed-certs-20220412200510-42006 kubelet[1305]: E0412 20:09:44.675037    1305 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:09:49 embed-certs-20220412200510-42006 kubelet[1305]: E0412 20:09:49.676399    1305 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:09:54 embed-certs-20220412200510-42006 kubelet[1305]: E0412 20:09:54.677550    1305 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:09:59 embed-certs-20220412200510-42006 kubelet[1305]: E0412 20:09:59.678564    1305 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:10:04 embed-certs-20220412200510-42006 kubelet[1305]: E0412 20:10:04.680406    1305 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20220412200510-42006 -n embed-certs-20220412200510-42006
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-20220412200510-42006 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: coredns-64897985d-zvglg storage-provisioner
helpers_test.go:272: ======> post-mortem[TestStartStop/group/embed-certs/serial/FirstStart]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context embed-certs-20220412200510-42006 describe pod coredns-64897985d-zvglg storage-provisioner
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context embed-certs-20220412200510-42006 describe pod coredns-64897985d-zvglg storage-provisioner: exit status 1 (50.750945ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-64897985d-zvglg" not found
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:277: kubectl --context embed-certs-20220412200510-42006 describe pod coredns-64897985d-zvglg storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (299.66s)
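Both GUEST_START exits above are the same symptom: the node never left NotReady because the kubelet kept reporting "cni plugin not initialized" (see the Ready condition in the describe-nodes output and the kubelet log), so minikube's readiness poll — the repeating node_ready.go:58 lines — exhausted its budget and logged "took 4m0s". For reference, a minimal sketch of that kind of readiness poll, assuming k8s.io/client-go and a kubeconfig at the default location pointing at the cluster under test; this is illustrative only, not minikube's actual implementation:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumption: ~/.kube/config points at the cluster under test.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		name := "embed-certs-20220412200510-42006" // node name from the log above
		// Poll the node's Ready condition, giving up after four minutes,
		// matching the "took 4m0s" duration metric in the log.
		err = wait.PollImmediate(2*time.Second, 4*time.Minute, func() (bool, error) {
			node, err := client.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
			if err != nil {
				return false, nil // transient API error: keep polling
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					fmt.Printf("node %q has status \"Ready\":%q\n", name, c.Status)
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
		if err != nil {
			fmt.Println("timed out waiting for the condition")
		}
	}

In this run no amount of polling could have succeeded: the Ready condition was pinned False by NetworkPluginNotReady, so the interesting failure is the kindnet-cni container that exited and was recreated at 20:08:48 in the containerd log, not the wait loop itself.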

TestNetworkPlugins/group/bridge/DNS (281.41s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:162: (dbg) Run:  kubectl --context bridge-20220412195202-42006 exec deployment/netcat -- nslookup kubernetes.default
E0412 20:08:14.514935   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/functional-20220412192609-42006/client.crt: no such file or directory
net_test.go:162: (dbg) Non-zero exit: kubectl --context bridge-20220412195202-42006 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.141513288s)
-- stdout --
	;; connection timed out; no servers could be reached
	
	
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **
net_test.go:162: (dbg) Run:  kubectl --context bridge-20220412195202-42006 exec deployment/netcat -- nslookup kubernetes.default
E0412 20:08:31.519777   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/enable-default-cni-20220412195202-42006/client.crt: no such file or directory
E0412 20:08:31.525088   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/enable-default-cni-20220412195202-42006/client.crt: no such file or directory
E0412 20:08:31.535428   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/enable-default-cni-20220412195202-42006/client.crt: no such file or directory
E0412 20:08:31.555751   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/enable-default-cni-20220412195202-42006/client.crt: no such file or directory
E0412 20:08:31.596209   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/enable-default-cni-20220412195202-42006/client.crt: no such file or directory
E0412 20:08:31.676412   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/enable-default-cni-20220412195202-42006/client.crt: no such file or directory
E0412 20:08:31.836817   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/enable-default-cni-20220412195202-42006/client.crt: no such file or directory
E0412 20:08:32.157268   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/enable-default-cni-20220412195202-42006/client.crt: no such file or directory
E0412 20:08:32.798208   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/enable-default-cni-20220412195202-42006/client.crt: no such file or directory
E0412 20:08:34.078445   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/enable-default-cni-20220412195202-42006/client.crt: no such file or directory
E0412 20:08:36.639412   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/enable-default-cni-20220412195202-42006/client.crt: no such file or directory
net_test.go:162: (dbg) Non-zero exit: kubectl --context bridge-20220412195202-42006 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.147997936s)
-- stdout --
	;; connection timed out; no servers could be reached
	
	
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **
net_test.go:162: (dbg) Run:  kubectl --context bridge-20220412195202-42006 exec deployment/netcat -- nslookup kubernetes.default
E0412 20:08:41.760649   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/enable-default-cni-20220412195202-42006/client.crt: no such file or directory
E0412 20:08:52.001342   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/enable-default-cni-20220412195202-42006/client.crt: no such file or directory
net_test.go:162: (dbg) Non-zero exit: kubectl --context bridge-20220412195202-42006 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.12601776s)
-- stdout --
	;; connection timed out; no servers could be reached
	
	
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **
E0412 20:08:57.856725   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/ingress-addon-legacy-20220412192911-42006/client.crt: no such file or directory
net_test.go:162: (dbg) Run:  kubectl --context bridge-20220412195202-42006 exec deployment/netcat -- nslookup kubernetes.default
E0412 20:09:12.481570   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/enable-default-cni-20220412195202-42006/client.crt: no such file or directory
net_test.go:162: (dbg) Non-zero exit: kubectl --context bridge-20220412195202-42006 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.137686281s)
-- stdout --
	;; connection timed out; no servers could be reached
	
	
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:162: (dbg) Run:  kubectl --context bridge-20220412195202-42006 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:162: (dbg) Non-zero exit: kubectl --context bridge-20220412195202-42006 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.135035002s)
-- stdout --
	;; connection timed out; no servers could be reached
	
	
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **
net_test.go:162: (dbg) Run:  kubectl --context bridge-20220412195202-42006 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:162: (dbg) Non-zero exit: kubectl --context bridge-20220412195202-42006 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.128427566s)
-- stdout --
	;; connection timed out; no servers could be reached
	
	
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **
E0412 20:09:53.442715   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/enable-default-cni-20220412195202-42006/client.crt: no such file or directory
net_test.go:162: (dbg) Run:  kubectl --context bridge-20220412195202-42006 exec deployment/netcat -- nslookup kubernetes.default
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:162: (dbg) Non-zero exit: kubectl --context bridge-20220412195202-42006 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.13522163s)
-- stdout --
	;; connection timed out; no servers could be reached
	
	
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **
net_test.go:162: (dbg) Run:  kubectl --context bridge-20220412195202-42006 exec deployment/netcat -- nslookup kubernetes.default
E0412 20:10:31.558947   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/auto-20220412195201-42006/client.crt: no such file or directory
net_test.go:162: (dbg) Non-zero exit: kubectl --context bridge-20220412195202-42006 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.146226643s)
-- stdout --
	;; connection timed out; no servers could be reached
	
	
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **
E0412 20:10:54.807307   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/ingress-addon-legacy-20220412192911-42006/client.crt: no such file or directory
E0412 20:10:58.260129   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/custom-weave-20220412195203-42006/client.crt: no such file or directory
net_test.go:162: (dbg) Run:  kubectl --context bridge-20220412195202-42006 exec deployment/netcat -- nslookup kubernetes.default
E0412 20:11:15.363587   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/enable-default-cni-20220412195202-42006/client.crt: no such file or directory
net_test.go:162: (dbg) Non-zero exit: kubectl --context bridge-20220412195202-42006 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.135566572s)
-- stdout --
	;; connection timed out; no servers could be reached
	
	
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **
net_test.go:162: (dbg) Run:  kubectl --context bridge-20220412195202-42006 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:162: (dbg) Non-zero exit: kubectl --context bridge-20220412195202-42006 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.125612548s)
-- stdout --
	;; connection timed out; no servers could be reached
	
	
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:162: (dbg) Run:  kubectl --context bridge-20220412195202-42006 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:162: (dbg) Non-zero exit: kubectl --context bridge-20220412195202-42006 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.142788542s)
-- stdout --
	;; connection timed out; no servers could be reached
	
	
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **
net_test.go:168: failed to do nslookup on kubernetes.default: exit status 1
net_test.go:173: failed nslookup: got=";; connection timed out; no servers could be reached\n\n\n", want=*"10.96.0.1"*
--- FAIL: TestNetworkPlugins/group/bridge/DNS (281.41s)
E0412 20:15:31.559092   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/auto-20220412195201-42006/client.crt: no such file or directory
E0412 20:15:42.223049   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/bridge-20220412195202-42006/client.crt: no such file or directory
E0412 20:15:54.807745   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/ingress-addon-legacy-20220412192911-42006/client.crt: no such file or directory
E0412 20:15:58.260557   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/custom-weave-20220412195203-42006/client.crt: no such file or directory
E0412 20:16:07.734987   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220412200453-42006/client.crt: no such file or directory
E0412 20:16:07.740296   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220412200453-42006/client.crt: no such file or directory
E0412 20:16:07.750554   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220412200453-42006/client.crt: no such file or directory
E0412 20:16:07.770853   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220412200453-42006/client.crt: no such file or directory
E0412 20:16:07.811151   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220412200453-42006/client.crt: no such file or directory
E0412 20:16:07.891528   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220412200453-42006/client.crt: no such file or directory
E0412 20:16:08.051695   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220412200453-42006/client.crt: no such file or directory
E0412 20:16:08.372570   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220412200453-42006/client.crt: no such file or directory
E0412 20:16:09.013120   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220412200453-42006/client.crt: no such file or directory
E0412 20:16:10.294028   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220412200453-42006/client.crt: no such file or directory
E0412 20:16:12.855098   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220412200453-42006/client.crt: no such file or directory
E0412 20:16:17.975740   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220412200453-42006/client.crt: no such file or directory
E0412 20:16:28.216334   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220412200453-42006/client.crt: no such file or directory
E0412 20:16:48.697135   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220412200453-42006/client.crt: no such file or directory
E0412 20:16:54.602714   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/auto-20220412195201-42006/client.crt: no such file or directory
E0412 20:17:10.366687   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/cilium-20220412195203-42006/client.crt: no such file or directory
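For reference, the assertion at net_test.go:173 expects the nslookup output to contain the ClusterIP of the kubernetes service (10.96.0.1); every attempt above timed out instead, which points at CoreDNS being unreachable from the pod network rather than at a wrong answer. A rough way to narrow this down by hand, assuming the test's netcat deployment is still running:

	# The exact probe the test runs
	kubectl --context bridge-20220412195202-42006 exec deployment/netcat -- nslookup kubernetes.default
	# Does the kube-dns service have ready endpoints?
	kubectl --context bridge-20220412195202-42006 -n kube-system get endpoints kube-dns
	# Are the CoreDNS pods themselves Running?
	kubectl --context bridge-20220412195202-42006 -n kube-system get pods -l k8s-app=kube-dns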
TestStartStop/group/old-k8s-version/serial/DeployApp (484.95s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:180: (dbg) Run:  kubectl --context old-k8s-version-20220412200421-42006 create -f testdata/busybox.yaml
start_stop_delete_test.go:180: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [ec310bb2-f359-41af-bce6-60536a6bd0a9] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
=== CONT  TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:180: ***** TestStartStop/group/old-k8s-version/serial/DeployApp: pod "integration-test=busybox" failed to start within 8m0s: timed out waiting for the condition ****
start_stop_delete_test.go:180: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20220412200421-42006 -n old-k8s-version-20220412200421-42006
start_stop_delete_test.go:180: TestStartStop/group/old-k8s-version/serial/DeployApp: showing logs for failed pods as of 2022-04-12 20:17:18.072916326 +0000 UTC m=+3417.049214747
start_stop_delete_test.go:180: (dbg) Run:  kubectl --context old-k8s-version-20220412200421-42006 describe po busybox -n default
start_stop_delete_test.go:180: (dbg) kubectl --context old-k8s-version-20220412200421-42006 describe po busybox -n default:
Name:         busybox
Namespace:    default
Priority:     0
Node:         <none>
Labels:       integration-test=busybox
Annotations:  <none>
Status:       Pending
IP:           
IPs:          <none>
Containers:
  busybox:
    Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
    Port:       <none>
    Host Port:  <none>
    Command:
      sleep
      3600
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-b5lb8 (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  default-token-b5lb8:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-b5lb8
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                    From               Message
  ----     ------            ----                   ----               -------
  Warning  FailedScheduling  8m                     default-scheduler  0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.
  Warning  FailedScheduling  5m24s (x1 over 6m54s)  default-scheduler  0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.
start_stop_delete_test.go:180: (dbg) Run:  kubectl --context old-k8s-version-20220412200421-42006 logs busybox -n default
start_stop_delete_test.go:180: (dbg) kubectl --context old-k8s-version-20220412200421-42006 logs busybox -n default:
start_stop_delete_test.go:180: wait: integration-test=busybox within 8m0s: timed out waiting for the condition
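The FailedScheduling events above show the cluster's single node carrying a taint the busybox pod does not tolerate; since minikube normally removes the control-plane taint on single-node clusters, a plausible culprit is a node.kubernetes.io/not-ready taint from a node that never became Ready. A quick way to surface the blocking taint (sketch, reusing this run's kubectl context):

	kubectl --context old-k8s-version-20220412200421-42006 get nodes \
	  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints}{"\n"}{end}'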
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220412200421-42006
helpers_test.go:235: (dbg) docker inspect old-k8s-version-20220412200421-42006:
-- stdout --
	[
	    {
	        "Id": "a5e4ff2bbf6e0c1f98d862b7c5909f328d958a622c77ca8f2a1aeb8757f4bc42",
	        "Created": "2022-04-12T20:04:30.270409412Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 249540,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-04-12T20:04:30.654643592Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:44d43b69f3d5ba7f801dca891b535f23f9839671e82277938ec7dc42a22c50d6",
	        "ResolvConfPath": "/var/lib/docker/containers/a5e4ff2bbf6e0c1f98d862b7c5909f328d958a622c77ca8f2a1aeb8757f4bc42/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a5e4ff2bbf6e0c1f98d862b7c5909f328d958a622c77ca8f2a1aeb8757f4bc42/hostname",
	        "HostsPath": "/var/lib/docker/containers/a5e4ff2bbf6e0c1f98d862b7c5909f328d958a622c77ca8f2a1aeb8757f4bc42/hosts",
	        "LogPath": "/var/lib/docker/containers/a5e4ff2bbf6e0c1f98d862b7c5909f328d958a622c77ca8f2a1aeb8757f4bc42/a5e4ff2bbf6e0c1f98d862b7c5909f328d958a622c77ca8f2a1aeb8757f4bc42-json.log",
	        "Name": "/old-k8s-version-20220412200421-42006",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "old-k8s-version-20220412200421-42006:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20220412200421-42006",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/7832f59e03daf68e56b6521f25b5ed3223d02619c327fdde0f78d7822640d042-init/diff:/var/lib/docker/overlay2/a46d95d024de4bf9705eb193a92586bdab1878cd991975232b71b00099a9dcbd/diff:/var/lib/docker/overlay2/ea82ee4a684697cc3575193cd81b57372b927c9bf8e744fce634f9abd0ce56f9/diff:/var/lib/docker/overlay2/78746ad8dd0d6497f442bd186c99cfd280a7ed0ff07c9d33d217c0f00c8c4565/diff:/var/lib/docker/overlay2/a402f380eceb56655ea5f1e6ca4a61a01ae014a5df04f1a7d02d8f57ff3e6c84/diff:/var/lib/docker/overlay2/b27a231791a4d14a662f9e6e34fdd213411e56cc17149199657aa480018b3c72/diff:/var/lib/docker/overlay2/0a44e7fc2c8d5589d496b9d0585d39e8e142f48342ff9669a35c370bd0298e42/diff:/var/lib/docker/overlay2/6ca98e52ca7d4cc60d14bd2db9969dd3356e0e0ce3acd5bfb5734e6e59f52c7e/diff:/var/lib/docker/overlay2/9957a7c00c30c9d801326093ddf20994a7ee1daaa54bc4dac5c2dd6d8711bd7e/diff:/var/lib/docker/overlay2/f7a1aafecf6ee716c484b5eecbbf236a53607c253fe283c289707fad85495a88/diff:/var/lib/docker/overlay2/fe8cd1
26522650fedfc827751e0b74da9a882ff48de51bc9dee6428ee3bc1122/diff:/var/lib/docker/overlay2/5b4cc7e4a78288063ad39231ca158608aa28e9dec6015d4e186e4c4d6888017f/diff:/var/lib/docker/overlay2/2a754ceb6abee0f92c99667fae50c7899233e94595630e9caffbf73cda1ff741/diff:/var/lib/docker/overlay2/9e69139d9b2bc63ab678378e004018ece394ec37e8289ba5eb30901dda160da5/diff:/var/lib/docker/overlay2/3db8e6413b3a1f309b81d2e1a79c3d239c4e4568b31a6f4bf92511f477f3a61d/diff:/var/lib/docker/overlay2/5ab54e45d09e2d6da4f4228ebae3075b5974e1d847526c1011fc7368392ef0d2/diff:/var/lib/docker/overlay2/6daf6a3cf916347bbbb70ace4aab29dd0f272dc9e39d6b0bf14940470857f1d5/diff:/var/lib/docker/overlay2/b85d29df9ed74e769c82a956eb46ca4eaf51018e94270fee2f58a6f2d82c354c/diff:/var/lib/docker/overlay2/0804b9c30e0dcc68e15139106e47bca1969b010d520652c87ff1476f5da9b799/diff:/var/lib/docker/overlay2/2ef50ba91c77826aae2efca8daf7194c2d56fd8e745476a35413585cdab580a6/diff:/var/lib/docker/overlay2/6f5a272367c30d47254dedc8a42e6b2791c406c3b74fd6a8242d568e4ec362e3/diff:/var/lib/d
ocker/overlay2/e978bd5ca7463862ca1b51d0bf19f95d916464dc866f09f1ab4a5ae4c082c3a9/diff:/var/lib/docker/overlay2/0d60a5805e276ca3bff4824250eab1d2960e9d10d28282e07652204c07dc107f/diff:/var/lib/docker/overlay2/d00efa0bc999057fcf3efdeed81022cc8b9b9871919f11d7d9199a3d22fda41b/diff:/var/lib/docker/overlay2/44d3db5bf7925c4cc8ee60008ff23d799e12ea6586850d797b930fa796788861/diff:/var/lib/docker/overlay2/4af15c525b7ce96b7fd4117c156f53cf9099702641c2907909c12b7019563d44/diff:/var/lib/docker/overlay2/ae9ca4b8da4afb1303158a42ec2ac83dc057c0eaefcd69b7eeaa094ae24a39e7/diff:/var/lib/docker/overlay2/afb8ebd776ddcba17d1056f2350cd0b303c6664964644896a92e9c07252b5d95/diff:/var/lib/docker/overlay2/41b6235378ad54ccaec907f16811e7cd66bd777db63151293f4d8247a33af8f1/diff:/var/lib/docker/overlay2/e079465076581cb577a9d5c7d676cecb6495ddd73d9fc330e734203dd7e48607/diff:/var/lib/docker/overlay2/2d3a7c3e62a99d54d94c2562e13b904453442bda8208afe73cdbe1afdbdd0684/diff:/var/lib/docker/overlay2/b9e03b9cbc1c5a9bbdbb0c99ca5d7539c2fa81a37872c40e07377b52f19
50f4b/diff:/var/lib/docker/overlay2/fd0b72378869edec809e7ead1e4448ae67c73245e0e98d751c51253c80f12d56/diff:/var/lib/docker/overlay2/a34f5625ad35eb2eb1058204a5c23590d70d9aae62a3a0cf05f87501c388ccde/diff:/var/lib/docker/overlay2/6221ad5f4d7b133c35d96ab112cf2eb437196475a72ea0ec8952c058c6644381/diff:/var/lib/docker/overlay2/b33a322162ab62a47e5e731b35da4a989d8a79fcb67e1925b109eace6772370c/diff:/var/lib/docker/overlay2/b52fc81aca49f276f1c709fa139521063628f4042b9da5969a3487a57ee3226b/diff:/var/lib/docker/overlay2/5b4d11a181cad1ea657c7ea99d422b51c942ece21b8d24442b4e8806644e0e1c/diff:/var/lib/docker/overlay2/1620ce1d42f02f38d07f3ff0970e3df6940a3be20f3c7cd835f4f40f5cc2d010/diff:/var/lib/docker/overlay2/43f18c528700dc241024bb24f43a0d5192ecc9575f4b053582410f6265326434/diff:/var/lib/docker/overlay2/e59874999e485483e50da428a499e40c91890c33515857454d7a64bc04ca0c43/diff:/var/lib/docker/overlay2/a120ff1bbaa325cd87d2682d6751d3bf287b66d4bbe31bd1f9f6283d724491ac/diff:/var/lib/docker/overlay2/a6a6f3646fabc023283ff6349b9627be8332c4
bb740688f8fda12c98bd76b725/diff:/var/lib/docker/overlay2/3c2b110c4b3a8689b2792b2b73f99f06bd9858b494c2164e812208579b0223f2/diff:/var/lib/docker/overlay2/98e3881e2e4128283f8d66fafc082bc795e22eab77f135635d3249367b92ba5c/diff:/var/lib/docker/overlay2/ce937670cf64eff618c699bfd15e46c6d70c0184fef594182e5ec6df83b265bc/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7832f59e03daf68e56b6521f25b5ed3223d02619c327fdde0f78d7822640d042/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7832f59e03daf68e56b6521f25b5ed3223d02619c327fdde0f78d7822640d042/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7832f59e03daf68e56b6521f25b5ed3223d02619c327fdde0f78d7822640d042/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20220412200421-42006",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20220412200421-42006/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20220412200421-42006",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20220412200421-42006",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20220412200421-42006",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3ad84289742f0dfbd44646dfe51c90a2743ffb78bf6626291683c05a3d95eee0",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49392"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49391"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49388"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49390"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49389"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/3ad84289742f",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20220412200421-42006": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "a5e4ff2bbf6e",
	                        "old-k8s-version-20220412200421-42006"
	                    ],
	                    "NetworkID": "0b96a6a249d72d5fff5d5b9db029edbfc6a07a56e8064108c65000591927cbc6",
	                    "EndpointID": "c3007d28c5878ca69ad88197e01438f31f4f4f7d8152c555a927532e6a59c8f3",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]
-- /stdout --
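When only the container's state matters, the inspect dump above can be narrowed with a Go template instead of scanning the full JSON, e.g.:

	docker inspect -f '{{.State.Status}} started={{.State.StartedAt}}' old-k8s-version-20220412200421-42006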
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20220412200421-42006 -n old-k8s-version-20220412200421-42006
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-20220412200421-42006 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-20220412200421-42006 logs -n 25: (1.073664473s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/DeployApp logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|--------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                            Args                            |                  Profile                   |  User   | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------------------------|--------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| stop    | -p                                                         | no-preload-20220412200453-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:06:17 UTC | Tue, 12 Apr 2022 20:06:37 UTC |
	|         | no-preload-20220412200453-42006                            |                                            |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                            |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | no-preload-20220412200453-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:06:37 UTC | Tue, 12 Apr 2022 20:06:38 UTC |
	|         | no-preload-20220412200453-42006                            |                                            |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                            |         |         |                               |                               |
	| start   | -p bridge-20220412195202-42006                             | bridge-20220412195202-42006                | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:02:42 UTC | Tue, 12 Apr 2022 20:07:57 UTC |
	|         | --memory=2048                                              |                                            |         |         |                               |                               |
	|         | --alsologtostderr                                          |                                            |         |         |                               |                               |
	|         | --wait=true --wait-timeout=5m                              |                                            |         |         |                               |                               |
	|         | --cni=bridge --driver=docker                               |                                            |         |         |                               |                               |
	|         | --container-runtime=containerd                             |                                            |         |         |                               |                               |
	| ssh     | -p bridge-20220412195202-42006                             | bridge-20220412195202-42006                | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:07:57 UTC | Tue, 12 Apr 2022 20:07:58 UTC |
	|         | pgrep -a kubelet                                           |                                            |         |         |                               |                               |
	| -p      | old-k8s-version-20220412200421-42006                       | old-k8s-version-20220412200421-42006       | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:09:15 UTC | Tue, 12 Apr 2022 20:09:16 UTC |
	|         | logs -n 25                                                 |                                            |         |         |                               |                               |
	| -p      | embed-certs-20220412200510-42006                           | embed-certs-20220412200510-42006           | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:10:08 UTC | Tue, 12 Apr 2022 20:10:09 UTC |
	|         | logs -n 25                                                 |                                            |         |         |                               |                               |
	| start   | -p                                                         | no-preload-20220412200453-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:06:38 UTC | Tue, 12 Apr 2022 20:12:02 UTC |
	|         | no-preload-20220412200453-42006                            |                                            |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                            |         |         |                               |                               |
	|         | --wait=true --preload=false                                |                                            |         |         |                               |                               |
	|         | --driver=docker                                            |                                            |         |         |                               |                               |
	|         | --container-runtime=containerd                             |                                            |         |         |                               |                               |
	|         | --kubernetes-version=v1.23.6-rc.0                          |                                            |         |         |                               |                               |
	| ssh     | -p                                                         | no-preload-20220412200453-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:12:20 UTC | Tue, 12 Apr 2022 20:12:20 UTC |
	|         | no-preload-20220412200453-42006                            |                                            |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                            |         |         |                               |                               |
	| pause   | -p                                                         | no-preload-20220412200453-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:12:20 UTC | Tue, 12 Apr 2022 20:12:21 UTC |
	|         | no-preload-20220412200453-42006                            |                                            |         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                            |         |         |                               |                               |
	| unpause | -p                                                         | no-preload-20220412200453-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:12:22 UTC | Tue, 12 Apr 2022 20:12:23 UTC |
	|         | no-preload-20220412200453-42006                            |                                            |         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                            |         |         |                               |                               |
	| delete  | -p                                                         | no-preload-20220412200453-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:12:24 UTC | Tue, 12 Apr 2022 20:12:27 UTC |
	|         | no-preload-20220412200453-42006                            |                                            |         |         |                               |                               |
	| delete  | -p                                                         | no-preload-20220412200453-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:12:27 UTC | Tue, 12 Apr 2022 20:12:27 UTC |
	|         | no-preload-20220412200453-42006                            |                                            |         |         |                               |                               |
	| delete  | -p                                                         | disable-driver-mounts-20220412201227-42006 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:12:27 UTC | Tue, 12 Apr 2022 20:12:28 UTC |
	|         | disable-driver-mounts-20220412201227-42006                 |                                            |         |         |                               |                               |
	| -p      | bridge-20220412195202-42006                                | bridge-20220412195202-42006                | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:12:49 UTC | Tue, 12 Apr 2022 20:12:50 UTC |
	|         | logs -n 25                                                 |                                            |         |         |                               |                               |
	| delete  | -p bridge-20220412195202-42006                             | bridge-20220412195202-42006                | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:12:50 UTC | Tue, 12 Apr 2022 20:12:53 UTC |
	| start   | -p newest-cni-20220412201253-42006 --memory=2200           | newest-cni-20220412201253-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:12:53 UTC | Tue, 12 Apr 2022 20:13:47 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                            |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                            |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                            |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                            |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=containerd            |                                            |         |         |                               |                               |
	|         | --kubernetes-version=v1.23.6-rc.0                          |                                            |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | newest-cni-20220412201253-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:13:47 UTC | Tue, 12 Apr 2022 20:13:48 UTC |
	|         | newest-cni-20220412201253-42006                            |                                            |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                            |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                            |         |         |                               |                               |
	| stop    | -p                                                         | newest-cni-20220412201253-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:13:48 UTC | Tue, 12 Apr 2022 20:14:08 UTC |
	|         | newest-cni-20220412201253-42006                            |                                            |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                            |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | newest-cni-20220412201253-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:14:08 UTC | Tue, 12 Apr 2022 20:14:08 UTC |
	|         | newest-cni-20220412201253-42006                            |                                            |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                            |         |         |                               |                               |
	| start   | -p newest-cni-20220412201253-42006 --memory=2200           | newest-cni-20220412201253-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:14:08 UTC | Tue, 12 Apr 2022 20:14:42 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                            |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                            |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                            |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                            |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=containerd            |                                            |         |         |                               |                               |
	|         | --kubernetes-version=v1.23.6-rc.0                          |                                            |         |         |                               |                               |
	| ssh     | -p                                                         | newest-cni-20220412201253-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:14:43 UTC | Tue, 12 Apr 2022 20:14:43 UTC |
	|         | newest-cni-20220412201253-42006                            |                                            |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                            |         |         |                               |                               |
	| pause   | -p                                                         | newest-cni-20220412201253-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:14:43 UTC | Tue, 12 Apr 2022 20:14:44 UTC |
	|         | newest-cni-20220412201253-42006                            |                                            |         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                            |         |         |                               |                               |
	| unpause | -p                                                         | newest-cni-20220412201253-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:14:45 UTC | Tue, 12 Apr 2022 20:14:45 UTC |
	|         | newest-cni-20220412201253-42006                            |                                            |         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                            |         |         |                               |                               |
	| delete  | -p                                                         | newest-cni-20220412201253-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:14:46 UTC | Tue, 12 Apr 2022 20:14:49 UTC |
	|         | newest-cni-20220412201253-42006                            |                                            |         |         |                               |                               |
	| delete  | -p                                                         | newest-cni-20220412201253-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:14:49 UTC | Tue, 12 Apr 2022 20:14:49 UTC |
	|         | newest-cni-20220412201253-42006                            |                                            |         |         |                               |                               |
	|---------|------------------------------------------------------------|--------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/04/12 20:14:08
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.18 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0412 20:14:08.832397  282203 out.go:297] Setting OutFile to fd 1 ...
	I0412 20:14:08.832526  282203 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0412 20:14:08.832537  282203 out.go:310] Setting ErrFile to fd 2...
	I0412 20:14:08.832541  282203 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0412 20:14:08.832644  282203 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/bin
	I0412 20:14:08.832908  282203 out.go:304] Setting JSON to false
	I0412 20:14:08.834493  282203 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":10602,"bootTime":1649783847,"procs":547,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1023-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0412 20:14:08.834611  282203 start.go:125] virtualization: kvm guest
	I0412 20:14:08.837207  282203 out.go:176] * [newest-cni-20220412201253-42006] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	I0412 20:14:08.838808  282203 out.go:176]   - MINIKUBE_LOCATION=13812
	I0412 20:14:08.837440  282203 notify.go:193] Checking for updates...
	I0412 20:14:08.840190  282203 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0412 20:14:08.841789  282203 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0412 20:14:08.843251  282203 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube
	I0412 20:14:08.844774  282203 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0412 20:14:08.845319  282203 config.go:178] Loaded profile config "newest-cni-20220412201253-42006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6-rc.0
	I0412 20:14:08.845793  282203 driver.go:346] Setting default libvirt URI to qemu:///system
	I0412 20:14:08.892101  282203 docker.go:137] docker version: linux-20.10.14
	I0412 20:14:08.892248  282203 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0412 20:14:08.993547  282203 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:75 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:49 SystemTime:2022-04-12 20:14:08.923798845 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1023-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662799872 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clie
ntInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0412 20:14:08.993679  282203 docker.go:254] overlay module found
	I0412 20:14:08.996175  282203 out.go:176] * Using the docker driver based on existing profile
	I0412 20:14:08.996210  282203 start.go:284] selected driver: docker
	I0412 20:14:08.996217  282203 start.go:801] validating driver "docker" against &{Name:newest-cni-20220412201253-42006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6-rc.0 ClusterName:newest-cni-20220412201253-42006 Namespace:default API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.23.6-rc.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[Met
ricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0412 20:14:08.996338  282203 start.go:812] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	W0412 20:14:08.996376  282203 oci.go:120] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0412 20:14:08.996397  282203 out.go:241] ! Your cgroup does not allow setting memory.
	I0412 20:14:08.998211  282203 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0412 20:14:08.998861  282203 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0412 20:14:09.094596  282203 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:75 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:49 SystemTime:2022-04-12 20:14:09.030624528 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1023-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662799872 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clie
ntInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	W0412 20:14:09.094806  282203 oci.go:120] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0412 20:14:09.094836  282203 out.go:241] ! Your cgroup does not allow setting memory.
	I0412 20:14:09.096887  282203 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0412 20:14:09.097012  282203 start_flags.go:866] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0412 20:14:09.097039  282203 cni.go:93] Creating CNI manager for ""
	I0412 20:14:09.097046  282203 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0412 20:14:09.097054  282203 cni.go:217] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0412 20:14:09.097062  282203 cni.go:222] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0412 20:14:09.097069  282203 start_flags.go:306] config:
	{Name:newest-cni-20220412201253-42006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6-rc.0 ClusterName:newest-cni-20220412201253-42006 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.23.6-rc.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false d
efault_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0412 20:14:09.099506  282203 out.go:176] * Starting control plane node newest-cni-20220412201253-42006 in cluster newest-cni-20220412201253-42006
	I0412 20:14:09.099556  282203 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0412 20:14:09.101249  282203 out.go:176] * Pulling base image ...
	I0412 20:14:09.101287  282203 preload.go:132] Checking if preload exists for k8s version v1.23.6-rc.0 and runtime containerd
	I0412 20:14:09.101322  282203 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-rc.0-containerd-overlay2-amd64.tar.lz4
	I0412 20:14:09.101342  282203 cache.go:57] Caching tarball of preloaded images
	I0412 20:14:09.101401  282203 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon
	I0412 20:14:09.101566  282203 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-rc.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0412 20:14:09.101582  282203 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6-rc.0 on containerd
	I0412 20:14:09.101721  282203 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/newest-cni-20220412201253-42006/config.json ...
	I0412 20:14:09.147707  282203 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon, skipping pull
	I0412 20:14:09.147734  282203 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 exists in daemon, skipping load
	I0412 20:14:09.147748  282203 cache.go:206] Successfully downloaded all kic artifacts
	I0412 20:14:09.147784  282203 start.go:352] acquiring machines lock for newest-cni-20220412201253-42006: {Name:mk0dccf8a2654d003d8787479cf4abb87e60a916 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0412 20:14:09.147896  282203 start.go:356] acquired machines lock for "newest-cni-20220412201253-42006" in 84.854µs
	I0412 20:14:09.147923  282203 start.go:94] Skipping create...Using existing machine configuration
	I0412 20:14:09.147932  282203 fix.go:55] fixHost starting: 
	I0412 20:14:09.148209  282203 cli_runner.go:164] Run: docker container inspect newest-cni-20220412201253-42006 --format={{.State.Status}}
	I0412 20:14:09.182695  282203 fix.go:103] recreateIfNeeded on newest-cni-20220412201253-42006: state=Stopped err=<nil>
	W0412 20:14:09.182743  282203 fix.go:129] unexpected machine state, will restart: <nil>
	I0412 20:14:09.128201  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:14:11.627831  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:14:09.185311  282203 out.go:176] * Restarting existing docker container for "newest-cni-20220412201253-42006" ...
	I0412 20:14:09.185403  282203 cli_runner.go:164] Run: docker start newest-cni-20220412201253-42006
	I0412 20:14:09.582922  282203 cli_runner.go:164] Run: docker container inspect newest-cni-20220412201253-42006 --format={{.State.Status}}
	I0412 20:14:09.620698  282203 kic.go:416] container "newest-cni-20220412201253-42006" state is running.
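	The restart path above is just the docker CLI driven through cli_runner; the same check by hand, using the profile name from this run (a sketch, not part of the captured log):
	  # Restart the stopped kic container and confirm its state:
	  docker start newest-cni-20220412201253-42006
	  docker container inspect newest-cni-20220412201253-42006 --format={{.State.Status}}   # expect "running"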
	I0412 20:14:09.621213  282203 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220412201253-42006
	I0412 20:14:09.657122  282203 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/newest-cni-20220412201253-42006/config.json ...
	I0412 20:14:09.657367  282203 machine.go:88] provisioning docker machine ...
	I0412 20:14:09.657398  282203 ubuntu.go:169] provisioning hostname "newest-cni-20220412201253-42006"
	I0412 20:14:09.657457  282203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220412201253-42006
	I0412 20:14:09.694424  282203 main.go:134] libmachine: Using SSH client type: native
	I0412 20:14:09.694593  282203 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7d71c0] 0x7da220 <nil>  [] 0s} 127.0.0.1 49422 <nil> <nil>}
	I0412 20:14:09.694609  282203 main.go:134] libmachine: About to run SSH command:
	sudo hostname newest-cni-20220412201253-42006 && echo "newest-cni-20220412201253-42006" | sudo tee /etc/hostname
	I0412 20:14:09.695270  282203 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55074->127.0.0.1:49422: read: connection reset by peer
	I0412 20:14:12.826188  282203 main.go:134] libmachine: SSH cmd err, output: <nil>: newest-cni-20220412201253-42006
	
	I0412 20:14:12.826283  282203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220412201253-42006
	I0412 20:14:12.860717  282203 main.go:134] libmachine: Using SSH client type: native
	I0412 20:14:12.860887  282203 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7d71c0] 0x7da220 <nil>  [] 0s} 127.0.0.1 49422 <nil> <nil>}
	I0412 20:14:12.860908  282203 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-20220412201253-42006' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-20220412201253-42006/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-20220412201253-42006' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0412 20:14:12.984427  282203 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0412 20:14:12.984458  282203 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.mini
kube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube}
	I0412 20:14:12.984485  282203 ubuntu.go:177] setting up certificates
	I0412 20:14:12.984495  282203 provision.go:83] configureAuth start
	I0412 20:14:12.984546  282203 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220412201253-42006
	I0412 20:14:13.022286  282203 provision.go:138] copyHostCerts
	I0412 20:14:13.022359  282203 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem, removing ...
	I0412 20:14:13.022434  282203 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem
	I0412 20:14:13.022507  282203 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem (1082 bytes)
	I0412 20:14:13.022629  282203 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem, removing ...
	I0412 20:14:13.022645  282203 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem
	I0412 20:14:13.022670  282203 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem (1123 bytes)
	I0412 20:14:13.022733  282203 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem, removing ...
	I0412 20:14:13.022741  282203 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem
	I0412 20:14:13.022761  282203 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem (1675 bytes)
	I0412 20:14:13.022827  282203 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem org=jenkins.newest-cni-20220412201253-42006 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-20220412201253-42006]
	I0412 20:14:13.147393  282203 provision.go:172] copyRemoteCerts
	I0412 20:14:13.147461  282203 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0412 20:14:13.147499  282203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220412201253-42006
	I0412 20:14:13.182738  282203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49422 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/newest-cni-20220412201253-42006/id_rsa Username:docker}
	I0412 20:14:13.271719  282203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0412 20:14:13.291955  282203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0412 20:14:13.311640  282203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0412 20:14:13.330587  282203 provision.go:86] duration metric: configureAuth took 346.079902ms
	I0412 20:14:13.330615  282203 ubuntu.go:193] setting minikube options for container-runtime
	I0412 20:14:13.330805  282203 config.go:178] Loaded profile config "newest-cni-20220412201253-42006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6-rc.0
	I0412 20:14:13.330817  282203 machine.go:91] provisioned docker machine in 3.673434359s
	I0412 20:14:13.330823  282203 start.go:306] post-start starting for "newest-cni-20220412201253-42006" (driver="docker")
	I0412 20:14:13.330829  282203 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0412 20:14:13.330883  282203 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0412 20:14:13.330918  282203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220412201253-42006
	I0412 20:14:13.365737  282203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49422 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/newest-cni-20220412201253-42006/id_rsa Username:docker}
	I0412 20:14:13.460195  282203 ssh_runner.go:195] Run: cat /etc/os-release
	I0412 20:14:13.463475  282203 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0412 20:14:13.463524  282203 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0412 20:14:13.463538  282203 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0412 20:14:13.463544  282203 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0412 20:14:13.463556  282203 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/addons for local assets ...
	I0412 20:14:13.463617  282203 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files for local assets ...
	I0412 20:14:13.463682  282203 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/420062.pem -> 420062.pem in /etc/ssl/certs
	I0412 20:14:13.463765  282203 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0412 20:14:13.471624  282203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/420062.pem --> /etc/ssl/certs/420062.pem (1708 bytes)
	I0412 20:14:13.491654  282203 start.go:309] post-start completed in 160.815375ms
	I0412 20:14:13.491734  282203 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0412 20:14:13.491791  282203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220412201253-42006
	I0412 20:14:13.529484  282203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49422 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/newest-cni-20220412201253-42006/id_rsa Username:docker}
	I0412 20:14:13.616940  282203 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0412 20:14:13.621059  282203 fix.go:57] fixHost completed within 4.473117291s
	I0412 20:14:13.621091  282203 start.go:81] releasing machines lock for "newest-cni-20220412201253-42006", held for 4.473181182s
	I0412 20:14:13.621178  282203 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220412201253-42006
	I0412 20:14:13.655978  282203 ssh_runner.go:195] Run: systemctl --version
	I0412 20:14:13.656014  282203 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0412 20:14:13.656038  282203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220412201253-42006
	I0412 20:14:13.656108  282203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220412201253-42006
	I0412 20:14:13.692203  282203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49422 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/newest-cni-20220412201253-42006/id_rsa Username:docker}
	I0412 20:14:13.693258  282203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49422 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/newest-cni-20220412201253-42006/id_rsa Username:docker}
	I0412 20:14:13.795984  282203 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0412 20:14:13.808689  282203 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0412 20:14:13.820011  282203 docker.go:183] disabling docker service ...
	I0412 20:14:13.820092  282203 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0412 20:14:13.830551  282203 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0412 20:14:14.127986  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:14:16.627569  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:14:13.840509  282203 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0412 20:14:13.920197  282203 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0412 20:14:13.996299  282203 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0412 20:14:14.006773  282203 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0412 20:14:14.020629  282203 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLm1vbml0b3IudjEuY2dyb3VwcyJdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNiIKICAgIHN0YXRzX2NvbGxlY3RfcGVyaW9kID0gMTAKICAgIGVuYWJsZV90bHNfc3RyZWFtaW5nID0gZ
mFsc2UKICAgIG1heF9jb250YWluZXJfbG9nX2xpbmVfc2l6ZSA9IDE2Mzg0CiAgICByZXN0cmljdF9vb21fc2NvcmVfYWRqID0gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZF0KICAgICAgZGlzY2FyZF91bnBhY2tlZF9sYXllcnMgPSB0cnVlCiAgICAgIHNuYXBzaG90dGVyID0gIm92ZXJsYXlmcyIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQuZGVmYXVsdF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnVudHJ1c3RlZF93b3JrbG9hZF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICIiCiAgICAgICAgcnVudGltZV9lbmdpbmUgPSAiIgogICAgICAgIHJ1bnRpbWVfcm9vdCA9ICIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzLnJ1bmNdCiAgICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICBTeXN0ZW1kQ
2dyb3VwID0gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY25pXQogICAgICBiaW5fZGlyID0gIi9vcHQvY25pL2JpbiIKICAgICAgY29uZl9kaXIgPSAiL2V0Yy9jbmkvbmV0Lm1rIgogICAgICBjb25mX3RlbXBsYXRlID0gIiIKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5yZWdpc3RyeV0KICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5Lm1pcnJvcnNdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5Lm1pcnJvcnMuImRvY2tlci5pbyJdCiAgICAgICAgICBlbmRwb2ludCA9IFsiaHR0cHM6Ly9yZWdpc3RyeS0xLmRvY2tlci5pbyJdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuc2VydmljZS52MS5kaWZmLXNlcnZpY2UiXQogICAgZGVmYXVsdCA9IFsid2Fsa2luZyJdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ2MudjEuc2NoZWR1bGVyIl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
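	The base64 payload above decodes to the containerd config.toml minikube installs; among other settings it points the CRI plugin's CNI conf_dir at "/etc/cni/net.mk", matching the kubelet.cni-conf-dir extra-config noted earlier. To read back what was written (sketch):
	  # Inspect the generated containerd CNI setting:
	  sudo grep conf_dir /etc/containerd/config.toml   # expect conf_dir = "/etc/cni/net.mk"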
	I0412 20:14:14.035412  282203 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0412 20:14:14.042432  282203 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0412 20:14:14.049388  282203 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0412 20:14:14.128037  282203 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0412 20:14:14.201778  282203 start.go:441] Will wait 60s for socket path /run/containerd/containerd.sock
	I0412 20:14:14.201900  282203 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0412 20:14:14.206190  282203 start.go:462] Will wait 60s for crictl version
	I0412 20:14:14.206249  282203 ssh_runner.go:195] Run: sudo crictl version
	I0412 20:14:14.233021  282203 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-04-12T20:14:14Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0412 20:14:19.127780  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:14:21.627899  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:14:25.280259  282203 ssh_runner.go:195] Run: sudo crictl version
	I0412 20:14:25.305913  282203 start.go:471] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.5.10
	RuntimeApiVersion:  v1alpha2
	I0412 20:14:25.305972  282203 ssh_runner.go:195] Run: containerd --version
	I0412 20:14:25.329153  282203 ssh_runner.go:195] Run: containerd --version
	I0412 20:14:25.353837  282203 out.go:176] * Preparing Kubernetes v1.23.6-rc.0 on containerd 1.5.10 ...
	I0412 20:14:25.353941  282203 cli_runner.go:164] Run: docker network inspect newest-cni-20220412201253-42006 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0412 20:14:25.390025  282203 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0412 20:14:25.393752  282203 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0412 20:14:25.406736  282203 out.go:176]   - kubelet.network-plugin=cni
	I0412 20:14:25.408682  282203 out.go:176]   - kubeadm.pod-network-cidr=192.168.111.111/16
	I0412 20:14:24.127325  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:14:26.127416  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:14:28.127721  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:14:25.410319  282203 out.go:176]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0412 20:14:25.410383  282203 preload.go:132] Checking if preload exists for k8s version v1.23.6-rc.0 and runtime containerd
	I0412 20:14:25.410438  282203 ssh_runner.go:195] Run: sudo crictl images --output json
	I0412 20:14:25.435000  282203 containerd.go:607] all images are preloaded for containerd runtime.
	I0412 20:14:25.435025  282203 containerd.go:521] Images already preloaded, skipping extraction
	I0412 20:14:25.435069  282203 ssh_runner.go:195] Run: sudo crictl images --output json
	I0412 20:14:25.460785  282203 containerd.go:607] all images are preloaded for containerd runtime.
	I0412 20:14:25.460815  282203 cache_images.go:84] Images are preloaded, skipping loading
	I0412 20:14:25.460865  282203 ssh_runner.go:195] Run: sudo crictl info
	I0412 20:14:25.486553  282203 cni.go:93] Creating CNI manager for ""
	I0412 20:14:25.486581  282203 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0412 20:14:25.486596  282203 kubeadm.go:87] Using pod CIDR: 192.168.111.111/16
	I0412 20:14:25.486612  282203 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:192.168.111.111/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.23.6-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-20220412201253-42006 NodeName:newest-cni-20220412201253-42006 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leade
r-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0412 20:14:25.486771  282203 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "newest-cni-20220412201253-42006"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "192.168.111.111/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "192.168.111.111/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
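	A rendered config like the one above can be exercised without touching the node via kubeadm's dry-run mode (a sketch; the path is where minikube stages the file a few lines below):
	  # Validate the kubeadm config without applying it:
	  sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run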
	
	I0412 20:14:25.486858  282203 kubeadm.go:936] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-20220412201253-42006 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.76.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6-rc.0 ClusterName:newest-cni-20220412201253-42006 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
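	The 10-kubeadm.conf override only takes effect once systemd rereads its unit files; the standard flow is the same daemon-reload/restart pair used for containerd above (a sketch; the kubelet restart itself happens later in the start sequence):
	  # Apply the kubelet unit override:
	  sudo systemctl daemon-reload
	  sudo systemctl restart kubelet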
	I0412 20:14:25.486911  282203 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6-rc.0
	I0412 20:14:25.495243  282203 binaries.go:44] Found k8s binaries, skipping transfer
	I0412 20:14:25.495328  282203 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0412 20:14:25.502983  282203 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (618 bytes)
	I0412 20:14:25.516969  282203 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0412 20:14:25.530231  282203 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2201 bytes)
	I0412 20:14:25.544174  282203 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0412 20:14:25.547463  282203 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0412 20:14:25.557235  282203 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/newest-cni-20220412201253-42006 for IP: 192.168.76.2
	I0412 20:14:25.557346  282203 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.key
	I0412 20:14:25.557383  282203 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.key
	I0412 20:14:25.557447  282203 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/newest-cni-20220412201253-42006/client.key
	I0412 20:14:25.557553  282203 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/newest-cni-20220412201253-42006/apiserver.key.31bdca25
	I0412 20:14:25.557606  282203 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/newest-cni-20220412201253-42006/proxy-client.key
	I0412 20:14:25.557698  282203 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/42006.pem (1338 bytes)
	W0412 20:14:25.557730  282203 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/42006_empty.pem, impossibly tiny 0 bytes
	I0412 20:14:25.557745  282203 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem (1679 bytes)
	I0412 20:14:25.557768  282203 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem (1082 bytes)
	I0412 20:14:25.557791  282203 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem (1123 bytes)
	I0412 20:14:25.557819  282203 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem (1675 bytes)
	I0412 20:14:25.557861  282203 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/420062.pem (1708 bytes)
	I0412 20:14:25.558574  282203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/newest-cni-20220412201253-42006/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0412 20:14:25.577575  282203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/newest-cni-20220412201253-42006/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0412 20:14:25.597461  282203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/newest-cni-20220412201253-42006/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0412 20:14:25.617831  282203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/newest-cni-20220412201253-42006/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0412 20:14:25.637035  282203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0412 20:14:25.655577  282203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0412 20:14:25.673593  282203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0412 20:14:25.693796  282203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0412 20:14:25.713653  282203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/42006.pem --> /usr/share/ca-certificates/42006.pem (1338 bytes)
	I0412 20:14:25.732646  282203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/420062.pem --> /usr/share/ca-certificates/420062.pem (1708 bytes)
	I0412 20:14:25.751515  282203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0412 20:14:25.770576  282203 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0412 20:14:25.784726  282203 ssh_runner.go:195] Run: openssl version
	I0412 20:14:25.790079  282203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/42006.pem && ln -fs /usr/share/ca-certificates/42006.pem /etc/ssl/certs/42006.pem"
	I0412 20:14:25.799378  282203 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42006.pem
	I0412 20:14:25.802945  282203 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Apr 12 19:26 /usr/share/ca-certificates/42006.pem
	I0412 20:14:25.803028  282203 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42006.pem
	I0412 20:14:25.808734  282203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/42006.pem /etc/ssl/certs/51391683.0"
	I0412 20:14:25.816535  282203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/420062.pem && ln -fs /usr/share/ca-certificates/420062.pem /etc/ssl/certs/420062.pem"
	I0412 20:14:25.825325  282203 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/420062.pem
	I0412 20:14:25.828750  282203 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Apr 12 19:26 /usr/share/ca-certificates/420062.pem
	I0412 20:14:25.828803  282203 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/420062.pem
	I0412 20:14:25.834167  282203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/420062.pem /etc/ssl/certs/3ec20f2e.0"
	I0412 20:14:25.841792  282203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0412 20:14:25.850010  282203 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0412 20:14:25.853624  282203 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Apr 12 19:21 /usr/share/ca-certificates/minikubeCA.pem
	I0412 20:14:25.853701  282203 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0412 20:14:25.859058  282203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
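	Each symlink created above (51391683.0, 3ec20f2e.0, b5213941.0) is named after the OpenSSL subject hash of the certificate it points to, which is how OpenSSL locates CAs in a hashed directory; reproducing the scheme for the minikube CA (sketch):
	  # Derive the hash-named symlink OpenSSL expects:
	  h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"   # here b5213941.0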
	I0412 20:14:25.866757  282203 kubeadm.go:391] StartCluster: {Name:newest-cni-20220412201253-42006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6-rc.0 ClusterName:newest-cni-20220412201253-42006 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.23.6-rc.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0412 20:14:25.866859  282203 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0412 20:14:25.866908  282203 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0412 20:14:25.894574  282203 cri.go:87] found id: "d969ce6ca95955b480d8655ab7bd7a09dabfb293b5353339e504f1f33b9eff67"
	I0412 20:14:25.894601  282203 cri.go:87] found id: "1d38fd300b7c85004f77d83cbb475438790ef3b9d337060fdb1b819d68d35ec9"
	I0412 20:14:25.894608  282203 cri.go:87] found id: "0bb8f66256b11644865229170aad9e34ea182a35e5158387000ff3b1865202fd"
	I0412 20:14:25.894614  282203 cri.go:87] found id: "a242ae4af2407bb2e31ddb8d71f49ef4cb0ff85cc236478c5f9535fa5c980eb3"
	I0412 20:14:25.894619  282203 cri.go:87] found id: "86c36d2f4f49c410f131864116fb679629344c479e0e487369a21787e119a356"
	I0412 20:14:25.894631  282203 cri.go:87] found id: "7c408f89710edca0b859d2e677ea93d81c6f5d56606b251c3a3d527ab1b6743d"
	I0412 20:14:25.894637  282203 cri.go:87] found id: ""
	I0412 20:14:25.894696  282203 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0412 20:14:25.909659  282203 cri.go:114] JSON = null
	W0412 20:14:25.909724  282203 kubeadm.go:398] unpause failed: list paused: list returned 0 containers, but ps returned 6
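
	[editor's note] The warning above comes from cross-checking two views of the runtime: crictl (via the CRI) reported six kube-system containers, while `runc list -f json` in the containerd runc root printed null, i.e. zero containers to unpause. A rough sketch of that cross-check follows; the runcState field names are an assumption based on runc's documented JSON list output:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
		"strings"
	)

	// runcState is the subset of `runc list -f json` output used here
	// (field names assumed, not taken from minikube's cri.go).
	type runcState struct {
		ID     string `json:"id"`
		Status string `json:"status"`
	}

	func main() {
		// Containers crictl reports for the kube-system namespace.
		psOut, _ := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		ids := strings.Fields(string(psOut))

		// What runc sees under the containerd runc root.
		listOut, _ := exec.Command("sudo", "runc",
			"--root", "/run/containerd/runc/k8s.io", "list", "-f", "json").Output()
		var states []runcState // stays nil when runc prints "null", as in the log
		_ = json.Unmarshal(listOut, &states)

		fmt.Printf("crictl ps returned %d, runc list returned %d\n", len(ids), len(states))
	}
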
	I0412 20:14:25.909774  282203 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0412 20:14:25.917474  282203 kubeadm.go:402] found existing configuration files, will attempt cluster restart
	I0412 20:14:25.917508  282203 kubeadm.go:601] restartCluster start
	I0412 20:14:25.917553  282203 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0412 20:14:25.925481  282203 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:14:25.926482  282203 kubeconfig.go:116] verify returned: extract IP: "newest-cni-20220412201253-42006" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0412 20:14:25.927149  282203 kubeconfig.go:127] "newest-cni-20220412201253-42006" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig - will repair!
	I0412 20:14:25.928050  282203 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig: {Name:mk47182dadcc139652898f38b199a7292a3b4031 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 20:14:25.929973  282203 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0412 20:14:25.937574  282203 api_server.go:165] Checking apiserver status ...
	I0412 20:14:25.937643  282203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:14:25.946692  282203 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:14:26.147196  282203 api_server.go:165] Checking apiserver status ...
	I0412 20:14:26.147313  282203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:14:26.157070  282203 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:14:26.347407  282203 api_server.go:165] Checking apiserver status ...
	I0412 20:14:26.347480  282203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:14:26.356517  282203 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:14:26.547770  282203 api_server.go:165] Checking apiserver status ...
	I0412 20:14:26.547871  282203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:14:26.557039  282203 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:14:26.747366  282203 api_server.go:165] Checking apiserver status ...
	I0412 20:14:26.747450  282203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:14:26.757308  282203 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:14:26.947424  282203 api_server.go:165] Checking apiserver status ...
	I0412 20:14:26.947524  282203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:14:26.956488  282203 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:14:27.147733  282203 api_server.go:165] Checking apiserver status ...
	I0412 20:14:27.147821  282203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:14:27.156974  282203 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:14:27.347245  282203 api_server.go:165] Checking apiserver status ...
	I0412 20:14:27.347355  282203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:14:27.356556  282203 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:14:27.547767  282203 api_server.go:165] Checking apiserver status ...
	I0412 20:14:27.547845  282203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:14:27.557055  282203 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:14:27.747315  282203 api_server.go:165] Checking apiserver status ...
	I0412 20:14:27.747407  282203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:14:27.756437  282203 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:14:27.947668  282203 api_server.go:165] Checking apiserver status ...
	I0412 20:14:27.947755  282203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:14:27.956980  282203 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:14:28.147211  282203 api_server.go:165] Checking apiserver status ...
	I0412 20:14:28.147335  282203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:14:28.156358  282203 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:14:28.347634  282203 api_server.go:165] Checking apiserver status ...
	I0412 20:14:28.347710  282203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:14:28.356777  282203 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:14:28.546978  282203 api_server.go:165] Checking apiserver status ...
	I0412 20:14:28.547079  282203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:14:28.555852  282203 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:14:28.746989  282203 api_server.go:165] Checking apiserver status ...
	I0412 20:14:28.747054  282203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:14:28.755735  282203 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:14:30.627141  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:14:32.627877  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:14:28.947273  282203 api_server.go:165] Checking apiserver status ...
	I0412 20:14:28.947359  282203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:14:28.956917  282203 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:14:28.956943  282203 api_server.go:165] Checking apiserver status ...
	I0412 20:14:28.956997  282203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:14:28.965673  282203 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:14:28.965703  282203 kubeadm.go:576] needs reconfigure: apiserver error: timed out waiting for the condition
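
	[editor's note] The repeated pgrep failures above are a deadline-bounded retry loop, probing for the kube-apiserver process at roughly 200ms intervals until it gives up and falls back to reconfiguring the cluster. A sketch of the assumed shape of that loop (not the exact api_server.go code):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForAPIServerPID polls pgrep until the kube-apiserver process shows
	// up or the deadline passes, roughly matching the cadence in the log.
	func waitForAPIServerPID(timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
			if err == nil {
				return strings.TrimSpace(string(out)), nil // pid found
			}
			time.Sleep(200 * time.Millisecond) // ~200ms between attempts above
		}
		return "", fmt.Errorf("timed out waiting for the condition")
	}

	func main() {
		pid, err := waitForAPIServerPID(3 * time.Second)
		if err != nil {
			fmt.Println("apiserver error:", err)
			return
		}
		fmt.Println("apiserver pid:", pid)
	}

	(The sketch also needs "strings" in its import block.)
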
	I0412 20:14:28.965712  282203 kubeadm.go:1067] stopping kube-system containers ...
	I0412 20:14:28.965726  282203 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0412 20:14:28.965780  282203 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0412 20:14:28.994340  282203 cri.go:87] found id: "d969ce6ca95955b480d8655ab7bd7a09dabfb293b5353339e504f1f33b9eff67"
	I0412 20:14:28.994369  282203 cri.go:87] found id: "1d38fd300b7c85004f77d83cbb475438790ef3b9d337060fdb1b819d68d35ec9"
	I0412 20:14:28.994378  282203 cri.go:87] found id: "0bb8f66256b11644865229170aad9e34ea182a35e5158387000ff3b1865202fd"
	I0412 20:14:28.994392  282203 cri.go:87] found id: "a242ae4af2407bb2e31ddb8d71f49ef4cb0ff85cc236478c5f9535fa5c980eb3"
	I0412 20:14:28.994401  282203 cri.go:87] found id: "86c36d2f4f49c410f131864116fb679629344c479e0e487369a21787e119a356"
	I0412 20:14:28.994410  282203 cri.go:87] found id: "7c408f89710edca0b859d2e677ea93d81c6f5d56606b251c3a3d527ab1b6743d"
	I0412 20:14:28.994419  282203 cri.go:87] found id: ""
	I0412 20:14:28.994431  282203 cri.go:232] Stopping containers: [d969ce6ca95955b480d8655ab7bd7a09dabfb293b5353339e504f1f33b9eff67 1d38fd300b7c85004f77d83cbb475438790ef3b9d337060fdb1b819d68d35ec9 0bb8f66256b11644865229170aad9e34ea182a35e5158387000ff3b1865202fd a242ae4af2407bb2e31ddb8d71f49ef4cb0ff85cc236478c5f9535fa5c980eb3 86c36d2f4f49c410f131864116fb679629344c479e0e487369a21787e119a356 7c408f89710edca0b859d2e677ea93d81c6f5d56606b251c3a3d527ab1b6743d]
	I0412 20:14:28.994486  282203 ssh_runner.go:195] Run: which crictl
	I0412 20:14:28.997755  282203 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop d969ce6ca95955b480d8655ab7bd7a09dabfb293b5353339e504f1f33b9eff67 1d38fd300b7c85004f77d83cbb475438790ef3b9d337060fdb1b819d68d35ec9 0bb8f66256b11644865229170aad9e34ea182a35e5158387000ff3b1865202fd a242ae4af2407bb2e31ddb8d71f49ef4cb0ff85cc236478c5f9535fa5c980eb3 86c36d2f4f49c410f131864116fb679629344c479e0e487369a21787e119a356 7c408f89710edca0b859d2e677ea93d81c6f5d56606b251c3a3d527ab1b6743d
	I0412 20:14:29.026024  282203 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0412 20:14:29.037162  282203 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0412 20:14:29.044772  282203 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Apr 12 20:13 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Apr 12 20:13 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2063 Apr 12 20:13 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Apr 12 20:13 /etc/kubernetes/scheduler.conf
	
	I0412 20:14:29.044835  282203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0412 20:14:29.052237  282203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0412 20:14:29.059409  282203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0412 20:14:29.066564  282203 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:14:29.066629  282203 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0412 20:14:29.073927  282203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0412 20:14:29.081806  282203 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:14:29.081873  282203 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0412 20:14:29.089097  282203 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0412 20:14:29.097286  282203 kubeadm.go:678] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0412 20:14:29.097318  282203 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6-rc.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0412 20:14:29.143554  282203 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6-rc.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0412 20:14:29.837517  282203 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6-rc.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0412 20:14:29.985443  282203 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6-rc.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0412 20:14:30.038605  282203 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6-rc.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
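
	[editor's note] Rather than a full `kubeadm init`, the restart path above replays individual init phases in order: certs, kubeconfig, kubelet-start, control-plane, then etcd. Schematically it reduces to the loop below; this illustrative helper invokes a plain kubeadm on PATH instead of the pinned /var/lib/minikube/binaries directory shown in the log:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// rerunPhases replays the individual kubeadm init phases the log shows,
	// which is how a cluster restart avoids a destructive full init.
	func rerunPhases() error {
		phases := [][]string{
			{"init", "phase", "certs", "all"},
			{"init", "phase", "kubeconfig", "all"},
			{"init", "phase", "kubelet-start"},
			{"init", "phase", "control-plane", "all"},
			{"init", "phase", "etcd", "local"},
		}
		for _, p := range phases {
			args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
			if out, err := exec.Command("sudo", append([]string{"kubeadm"}, args...)...).CombinedOutput(); err != nil {
				return fmt.Errorf("kubeadm %v: %v\n%s", p, err, out)
			}
		}
		return nil
	}

	func main() {
		if err := rerunPhases(); err != nil {
			fmt.Println(err)
		}
	}
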
	I0412 20:14:30.112525  282203 api_server.go:51] waiting for apiserver process to appear ...
	I0412 20:14:30.112599  282203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:14:30.622626  282203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:14:31.122421  282203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:14:31.622412  282203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:14:32.122000  282203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:14:32.622749  282203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:14:33.122311  282203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:14:33.622220  282203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:14:35.128008  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:14:37.628055  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:14:34.122750  282203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:14:34.622370  282203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:14:35.122375  282203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:14:35.622023  282203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:14:36.122611  282203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:14:36.192499  282203 api_server.go:71] duration metric: took 6.079970753s to wait for apiserver process to appear ...
	I0412 20:14:36.192531  282203 api_server.go:87] waiting for apiserver healthz status ...
	I0412 20:14:36.192547  282203 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0412 20:14:36.192951  282203 api_server.go:256] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0412 20:14:36.693238  282203 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0412 20:14:39.081785  282203 api_server.go:266] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0412 20:14:39.081830  282203 api_server.go:102] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0412 20:14:39.193101  282203 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0412 20:14:39.198543  282203 api_server.go:266] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0412 20:14:39.198577  282203 api_server.go:102] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0412 20:14:39.693125  282203 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0412 20:14:39.698513  282203 api_server.go:266] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0412 20:14:39.698546  282203 api_server.go:102] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0412 20:14:40.194142  282203 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0412 20:14:40.199360  282203 api_server.go:266] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0412 20:14:40.199402  282203 api_server.go:102] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0412 20:14:40.693984  282203 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0412 20:14:40.698538  282203 api_server.go:266] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0412 20:14:40.704599  282203 api_server.go:140] control plane version: v1.23.6-rc.0
	I0412 20:14:40.704627  282203 api_server.go:130] duration metric: took 4.512088959s to wait for apiserver health ...
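
	[editor's note] The healthz probes above progress from connection refused, to 403 for system:anonymous, to 500 while post-start hooks (rbac/bootstrap-roles and friends) settle, and finally to 200. A sketch of such a probe, assuming an unauthenticated client with TLS verification disabled, which is consistent with the anonymous 403 seen early on:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// checkHealthz issues an HTTPS GET against /healthz without client
	// credentials or certificate verification, returning status and body.
	func checkHealthz(url string) (int, string, error) {
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get(url)
		if err != nil {
			return 0, "", err
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		return resp.StatusCode, string(body), nil
	}

	func main() {
		for {
			code, body, err := checkHealthz("https://192.168.76.2:8443/healthz")
			if err == nil && code == http.StatusOK {
				fmt.Println(body) // "ok"
				return
			}
			time.Sleep(500 * time.Millisecond) // ~500ms cadence, as in the log
		}
	}
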
	I0412 20:14:40.704637  282203 cni.go:93] Creating CNI manager for ""
	I0412 20:14:40.704648  282203 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0412 20:14:40.707243  282203 out.go:176] * Configuring CNI (Container Networking Interface) ...
	I0412 20:14:40.707307  282203 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0412 20:14:40.711258  282203 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.6-rc.0/kubectl ...
	I0412 20:14:40.711285  282203 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0412 20:14:40.725079  282203 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6-rc.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
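
	[editor's note] The two lines above show the staging pattern minikube uses for manifests: copy the bytes onto the node ("scp memory"), then apply the file with the pinned kubectl and the in-VM kubeconfig. A minimal sketch of that pattern, with paths taken from the log and the helper itself purely illustrative:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// applyManifest writes a manifest to disk, then applies it with the
	// version-pinned kubectl binary and the node-local kubeconfig.
	func applyManifest(manifest []byte, path string) error {
		if err := os.WriteFile(path, manifest, 0644); err != nil {
			return err
		}
		cmd := exec.Command("sudo",
			"/var/lib/minikube/binaries/v1.23.6-rc.0/kubectl",
			"apply", "--kubeconfig=/var/lib/minikube/kubeconfig", "-f", path)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		return cmd.Run()
	}

	func main() {
		cni := []byte("# kindnet CNI manifest bytes would go here\n")
		if err := applyManifest(cni, "/var/tmp/minikube/cni.yaml"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
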
	I0412 20:14:41.409231  282203 system_pods.go:43] waiting for kube-system pods to appear ...
	I0412 20:14:41.417820  282203 system_pods.go:59] 9 kube-system pods found
	I0412 20:14:41.417861  282203 system_pods.go:61] "coredns-64897985d-4bvbc" [fb9e8493-9c0d-4e05-b53a-1749537e5040] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0412 20:14:41.417873  282203 system_pods.go:61] "etcd-newest-cni-20220412201253-42006" [3aad179e-c3c7-4666-a6d3-d255640590a8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0412 20:14:41.417889  282203 system_pods.go:61] "kindnet-n5jt7" [a91f07c6-2b78-4581-b9ac-f3a3c3626dd8] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0412 20:14:41.417894  282203 system_pods.go:61] "kube-apiserver-newest-cni-20220412201253-42006" [2d4d9c73-5232-4a9c-99fb-7b9006cf532b] Running
	I0412 20:14:41.417903  282203 system_pods.go:61] "kube-controller-manager-newest-cni-20220412201253-42006" [ddacb408-0fe4-4726-b426-a84e7d23a1c0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0412 20:14:41.417913  282203 system_pods.go:61] "kube-proxy-jp96c" [3b9c939e-cafa-4614-a930-02dbf11e941f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0412 20:14:41.417920  282203 system_pods.go:61] "kube-scheduler-newest-cni-20220412201253-42006" [7cc7f50d-6fe0-405a-9438-00b84708bcdd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0412 20:14:41.417932  282203 system_pods.go:61] "metrics-server-b955d9d8-99nk4" [68d97c36-9d61-4926-bd17-e63396989cc8] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0412 20:14:41.417938  282203 system_pods.go:61] "storage-provisioner" [43ce4397-4b28-450b-b967-f8f2b597585c] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0412 20:14:41.417944  282203 system_pods.go:74] duration metric: took 8.691981ms to wait for pod list to return data ...
	I0412 20:14:41.417956  282203 node_conditions.go:102] verifying NodePressure condition ...
	I0412 20:14:41.421510  282203 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0412 20:14:41.421537  282203 node_conditions.go:123] node cpu capacity is 8
	I0412 20:14:41.421549  282203 node_conditions.go:105] duration metric: took 3.589136ms to run NodePressure ...
	I0412 20:14:41.421570  282203 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0412 20:14:41.576233  282203 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0412 20:14:41.583862  282203 ops.go:34] apiserver oom_adj: -16
	I0412 20:14:41.583887  282203 kubeadm.go:605] restartCluster took 15.666373103s
	I0412 20:14:41.583897  282203 kubeadm.go:393] StartCluster complete in 15.717149501s
	I0412 20:14:41.583915  282203 settings.go:142] acquiring lock: {Name:mkaf0259d09993f7f0249c32b54fea561e21f88c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 20:14:41.584019  282203 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0412 20:14:41.586119  282203 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig: {Name:mk47182dadcc139652898f38b199a7292a3b4031 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 20:14:41.591379  282203 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "newest-cni-20220412201253-42006" rescaled to 1
	I0412 20:14:41.591451  282203 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.23.6-rc.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0412 20:14:41.593719  282203 out.go:176] * Verifying Kubernetes components...
	I0412 20:14:41.591533  282203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0412 20:14:41.591554  282203 addons.go:415] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0412 20:14:41.591660  282203 config.go:178] Loaded profile config "newest-cni-20220412201253-42006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6-rc.0
	I0412 20:14:41.593837  282203 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0412 20:14:41.593881  282203 addons.go:65] Setting storage-provisioner=true in profile "newest-cni-20220412201253-42006"
	I0412 20:14:41.593907  282203 addons.go:153] Setting addon storage-provisioner=true in "newest-cni-20220412201253-42006"
	W0412 20:14:41.593912  282203 addons.go:165] addon storage-provisioner should already be in state true
	I0412 20:14:41.593947  282203 addons.go:65] Setting default-storageclass=true in profile "newest-cni-20220412201253-42006"
	I0412 20:14:41.593971  282203 addons.go:65] Setting metrics-server=true in profile "newest-cni-20220412201253-42006"
	I0412 20:14:41.593979  282203 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-20220412201253-42006"
	I0412 20:14:41.593992  282203 addons.go:153] Setting addon metrics-server=true in "newest-cni-20220412201253-42006"
	W0412 20:14:41.594005  282203 addons.go:165] addon metrics-server should already be in state true
	I0412 20:14:41.593995  282203 addons.go:65] Setting dashboard=true in profile "newest-cni-20220412201253-42006"
	I0412 20:14:41.594043  282203 host.go:66] Checking if "newest-cni-20220412201253-42006" exists ...
	I0412 20:14:41.593973  282203 host.go:66] Checking if "newest-cni-20220412201253-42006" exists ...
	I0412 20:14:41.594045  282203 addons.go:153] Setting addon dashboard=true in "newest-cni-20220412201253-42006"
	W0412 20:14:41.594280  282203 addons.go:165] addon dashboard should already be in state true
	I0412 20:14:41.594328  282203 host.go:66] Checking if "newest-cni-20220412201253-42006" exists ...
	I0412 20:14:41.594334  282203 cli_runner.go:164] Run: docker container inspect newest-cni-20220412201253-42006 --format={{.State.Status}}
	I0412 20:14:41.594502  282203 cli_runner.go:164] Run: docker container inspect newest-cni-20220412201253-42006 --format={{.State.Status}}
	I0412 20:14:41.594639  282203 cli_runner.go:164] Run: docker container inspect newest-cni-20220412201253-42006 --format={{.State.Status}}
	I0412 20:14:41.594799  282203 cli_runner.go:164] Run: docker container inspect newest-cni-20220412201253-42006 --format={{.State.Status}}
	I0412 20:14:41.645907  282203 out.go:176]   - Using image kubernetesui/dashboard:v2.5.1
	I0412 20:14:41.648341  282203 out.go:176]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0412 20:14:41.650175  282203 out.go:176]   - Using image k8s.gcr.io/echoserver:1.4
	I0412 20:14:41.651645  282203 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0412 20:14:41.648424  282203 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0412 20:14:41.651681  282203 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0412 20:14:41.650260  282203 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0412 20:14:41.651782  282203 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0412 20:14:41.651798  282203 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0412 20:14:41.651811  282203 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0412 20:14:41.651751  282203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220412201253-42006
	I0412 20:14:41.651850  282203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220412201253-42006
	I0412 20:14:41.651850  282203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220412201253-42006
	I0412 20:14:41.667707  282203 addons.go:153] Setting addon default-storageclass=true in "newest-cni-20220412201253-42006"
	W0412 20:14:41.667739  282203 addons.go:165] addon default-storageclass should already be in state true
	I0412 20:14:41.667770  282203 host.go:66] Checking if "newest-cni-20220412201253-42006" exists ...
	I0412 20:14:41.668264  282203 cli_runner.go:164] Run: docker container inspect newest-cni-20220412201253-42006 --format={{.State.Status}}
	I0412 20:14:41.679431  282203 start.go:757] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0412 20:14:41.679495  282203 api_server.go:51] waiting for apiserver process to appear ...
	I0412 20:14:41.679542  282203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:14:41.692016  282203 api_server.go:71] duration metric: took 100.509345ms to wait for apiserver process to appear ...
	I0412 20:14:41.692053  282203 api_server.go:87] waiting for apiserver healthz status ...
	I0412 20:14:41.692097  282203 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0412 20:14:41.698410  282203 api_server.go:266] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0412 20:14:41.699444  282203 api_server.go:140] control plane version: v1.23.6-rc.0
	I0412 20:14:41.699470  282203 api_server.go:130] duration metric: took 7.409196ms to wait for apiserver health ...
	I0412 20:14:41.699481  282203 system_pods.go:43] waiting for kube-system pods to appear ...
	I0412 20:14:41.701111  282203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49422 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/newest-cni-20220412201253-42006/id_rsa Username:docker}
	I0412 20:14:41.706303  282203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49422 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/newest-cni-20220412201253-42006/id_rsa Username:docker}
	I0412 20:14:41.706406  282203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49422 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/newest-cni-20220412201253-42006/id_rsa Username:docker}
	I0412 20:14:41.707318  282203 system_pods.go:59] 9 kube-system pods found
	I0412 20:14:41.707353  282203 system_pods.go:61] "coredns-64897985d-4bvbc" [fb9e8493-9c0d-4e05-b53a-1749537e5040] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0412 20:14:41.707367  282203 system_pods.go:61] "etcd-newest-cni-20220412201253-42006" [3aad179e-c3c7-4666-a6d3-d255640590a8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0412 20:14:41.707377  282203 system_pods.go:61] "kindnet-n5jt7" [a91f07c6-2b78-4581-b9ac-f3a3c3626dd8] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0412 20:14:41.707389  282203 system_pods.go:61] "kube-apiserver-newest-cni-20220412201253-42006" [2d4d9c73-5232-4a9c-99fb-7b9006cf532b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0412 20:14:41.707406  282203 system_pods.go:61] "kube-controller-manager-newest-cni-20220412201253-42006" [ddacb408-0fe4-4726-b426-a84e7d23a1c0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0412 20:14:41.707419  282203 system_pods.go:61] "kube-proxy-jp96c" [3b9c939e-cafa-4614-a930-02dbf11e941f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0412 20:14:41.707429  282203 system_pods.go:61] "kube-scheduler-newest-cni-20220412201253-42006" [7cc7f50d-6fe0-405a-9438-00b84708bcdd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0412 20:14:41.707446  282203 system_pods.go:61] "metrics-server-b955d9d8-99nk4" [68d97c36-9d61-4926-bd17-e63396989cc8] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0412 20:14:41.707460  282203 system_pods.go:61] "storage-provisioner" [43ce4397-4b28-450b-b967-f8f2b597585c] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0412 20:14:41.707470  282203 system_pods.go:74] duration metric: took 7.981821ms to wait for pod list to return data ...
	I0412 20:14:41.707485  282203 default_sa.go:34] waiting for default service account to be created ...
	I0412 20:14:41.710431  282203 default_sa.go:45] found service account: "default"
	I0412 20:14:41.710468  282203 default_sa.go:55] duration metric: took 2.960657ms for default service account to be created ...
	I0412 20:14:41.710484  282203 kubeadm.go:548] duration metric: took 118.993322ms to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0412 20:14:41.710512  282203 node_conditions.go:102] verifying NodePressure condition ...
	I0412 20:14:41.713571  282203 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0412 20:14:41.713602  282203 node_conditions.go:123] node cpu capacity is 8
	I0412 20:14:41.713615  282203 node_conditions.go:105] duration metric: took 3.097862ms to run NodePressure ...
	I0412 20:14:41.713630  282203 start.go:213] waiting for startup goroutines ...
	I0412 20:14:41.720393  282203 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0412 20:14:41.720422  282203 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0412 20:14:41.720491  282203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220412201253-42006
	I0412 20:14:41.757709  282203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49422 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/newest-cni-20220412201253-42006/id_rsa Username:docker}
	I0412 20:14:41.804226  282203 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0412 20:14:41.804481  282203 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0412 20:14:41.804508  282203 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0412 20:14:41.804720  282203 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0412 20:14:41.804748  282203 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0412 20:14:41.819378  282203 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0412 20:14:41.819406  282203 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0412 20:14:41.819826  282203 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0412 20:14:41.819846  282203 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0412 20:14:41.834332  282203 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0412 20:14:41.834367  282203 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0412 20:14:41.834666  282203 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0412 20:14:41.834688  282203 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0412 20:14:41.885128  282203 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0412 20:14:41.885162  282203 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0412 20:14:41.887023  282203 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0412 20:14:41.887024  282203 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0412 20:14:41.904985  282203 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0412 20:14:41.905020  282203 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0412 20:14:41.984315  282203 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0412 20:14:41.984351  282203 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0412 20:14:42.005906  282203 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0412 20:14:42.005935  282203 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0412 20:14:42.084416  282203 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0412 20:14:42.084456  282203 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0412 20:14:42.108756  282203 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0412 20:14:42.108790  282203 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0412 20:14:42.191600  282203 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0412 20:14:42.390295  282203 addons.go:386] Verifying addon metrics-server=true in "newest-cni-20220412201253-42006"
	I0412 20:14:42.587518  282203 out.go:176] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0412 20:14:42.587549  282203 addons.go:417] enableAddons completed in 996.00198ms
	I0412 20:14:42.625739  282203 start.go:499] kubectl: 1.23.5, cluster: 1.23.6-rc.0 (minor skew: 0)
	I0412 20:14:42.628049  282203 out.go:176] * Done! kubectl is now configured to use "newest-cni-20220412201253-42006" cluster and "default" namespace by default
	I0412 20:14:39.628134  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:14:41.628747  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:14:44.127896  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:14:46.627912  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:14:49.127578  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:14:51.627785  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:14:54.127667  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:14:56.627555  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:14:58.627673  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:15:01.127467  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:15:03.127958  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:15:05.627336  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:15:08.127482  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:15:10.128205  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:15:12.627006  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:15:14.627346  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:15:16.627715  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:15:19.127750  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:15:21.628033  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:15:24.127487  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:15:26.127773  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:15:28.627700  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:15:30.627863  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:15:32.627913  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:15:35.127918  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:15:37.627523  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:15:40.127924  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:15:42.627025  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:15:44.627571  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:15:46.628015  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:15:49.127289  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:15:51.627337  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:15:53.627707  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:15:56.127293  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:15:58.127903  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:16:00.128429  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:16:02.129651  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:16:04.627411  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:16:07.127206  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:16:09.128308  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:16:11.627780  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:16:14.127483  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:16:16.627781  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:16:19.127539  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:16:21.627671  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:16:24.127732  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:16:26.627810  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:16:29.126973  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:16:31.128232  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:16:33.626978  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:16:35.627709  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:16:38.127682  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:16:40.627714  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:16:43.127935  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:16:45.627570  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:16:47.627702  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:16:50.127764  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:16:52.627288  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:16:55.127319  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:16:57.128161  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:16:59.627554  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:17:02.128657  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:17:04.627577  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:17:07.127689  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:17:09.627222  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:17:12.127950  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:17:14.627403  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:17:17.127577  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	1bd2c2fccd8c5       6de166512aa22       1 second ago        Running             kindnet-cni               4                   72ec8def5691d
	f03411fc53304       6de166512aa22       3 minutes ago       Exited              kindnet-cni               3                   72ec8def5691d
	d1642a69585f2       c21b0c7400f98       12 minutes ago      Running             kube-proxy                0                   633802cf99325
	6cc69a6c92a9c       301ddc62b80b1       12 minutes ago      Running             kube-scheduler            0                   a58c9be88b91f
	e47ba7bc7187c       b305571ca60a5       12 minutes ago      Running             kube-apiserver            0                   1038e52b21658
	f29f2d4e263bc       b2756210eeabf       12 minutes ago      Running             etcd                      0                   8b1dc4454ac4d
	e3d3ef830b73a       06a629a7e51cd       12 minutes ago      Running             kube-controller-manager   0                   7042f76bd3470
	
	* 
	* ==> containerd <==
	* -- Logs begin at Tue 2022-04-12 20:04:30 UTC, end at Tue 2022-04-12 20:17:19 UTC. --
	Apr 12 20:10:37 old-k8s-version-20220412200421-42006 containerd[471]: time="2022-04-12T20:10:37.217028672Z" level=info msg="RemoveContainer for \"019c66def7622dba48d959bc981c7d3e780afe2450172b618014e5aa7f78e227\" returns successfully"
	Apr 12 20:10:48 old-k8s-version-20220412200421-42006 containerd[471]: time="2022-04-12T20:10:48.676396846Z" level=info msg="CreateContainer within sandbox \"72ec8def5691dad6428509dd888e491782c456828105fcda0a80993268baecd8\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:2,}"
	Apr 12 20:10:48 old-k8s-version-20220412200421-42006 containerd[471]: time="2022-04-12T20:10:48.691989104Z" level=info msg="CreateContainer within sandbox \"72ec8def5691dad6428509dd888e491782c456828105fcda0a80993268baecd8\" for &ContainerMetadata{Name:kindnet-cni,Attempt:2,} returns container id \"f15a1baf56cab3bcc159300dee79248a0ee811277a6810065b58050c96a7f78b\""
	Apr 12 20:10:48 old-k8s-version-20220412200421-42006 containerd[471]: time="2022-04-12T20:10:48.692643513Z" level=info msg="StartContainer for \"f15a1baf56cab3bcc159300dee79248a0ee811277a6810065b58050c96a7f78b\""
	Apr 12 20:10:48 old-k8s-version-20220412200421-42006 containerd[471]: time="2022-04-12T20:10:48.884793168Z" level=info msg="StartContainer for \"f15a1baf56cab3bcc159300dee79248a0ee811277a6810065b58050c96a7f78b\" returns successfully"
	Apr 12 20:13:29 old-k8s-version-20220412200421-42006 containerd[471]: time="2022-04-12T20:13:29.136711579Z" level=info msg="shim disconnected" id=f15a1baf56cab3bcc159300dee79248a0ee811277a6810065b58050c96a7f78b
	Apr 12 20:13:29 old-k8s-version-20220412200421-42006 containerd[471]: time="2022-04-12T20:13:29.136781903Z" level=warning msg="cleaning up after shim disconnected" id=f15a1baf56cab3bcc159300dee79248a0ee811277a6810065b58050c96a7f78b namespace=k8s.io
	Apr 12 20:13:29 old-k8s-version-20220412200421-42006 containerd[471]: time="2022-04-12T20:13:29.136805340Z" level=info msg="cleaning up dead shim"
	Apr 12 20:13:29 old-k8s-version-20220412200421-42006 containerd[471]: time="2022-04-12T20:13:29.147372302Z" level=warning msg="cleanup warnings time=\"2022-04-12T20:13:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3555\n"
	Apr 12 20:13:29 old-k8s-version-20220412200421-42006 containerd[471]: time="2022-04-12T20:13:29.446586405Z" level=info msg="RemoveContainer for \"78996594d04da29b800c294937702cde8e1c1ed203ac6a1a024c00cbba2b0c74\""
	Apr 12 20:13:29 old-k8s-version-20220412200421-42006 containerd[471]: time="2022-04-12T20:13:29.452931148Z" level=info msg="RemoveContainer for \"78996594d04da29b800c294937702cde8e1c1ed203ac6a1a024c00cbba2b0c74\" returns successfully"
	Apr 12 20:13:53 old-k8s-version-20220412200421-42006 containerd[471]: time="2022-04-12T20:13:53.675938411Z" level=info msg="CreateContainer within sandbox \"72ec8def5691dad6428509dd888e491782c456828105fcda0a80993268baecd8\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:3,}"
	Apr 12 20:13:53 old-k8s-version-20220412200421-42006 containerd[471]: time="2022-04-12T20:13:53.689783031Z" level=info msg="CreateContainer within sandbox \"72ec8def5691dad6428509dd888e491782c456828105fcda0a80993268baecd8\" for &ContainerMetadata{Name:kindnet-cni,Attempt:3,} returns container id \"f03411fc533041f9ddcf991f18a51b6055896e203a19557ce49131bc9e7796b4\""
	Apr 12 20:13:53 old-k8s-version-20220412200421-42006 containerd[471]: time="2022-04-12T20:13:53.690382317Z" level=info msg="StartContainer for \"f03411fc533041f9ddcf991f18a51b6055896e203a19557ce49131bc9e7796b4\""
	Apr 12 20:13:53 old-k8s-version-20220412200421-42006 containerd[471]: time="2022-04-12T20:13:53.884532000Z" level=info msg="StartContainer for \"f03411fc533041f9ddcf991f18a51b6055896e203a19557ce49131bc9e7796b4\" returns successfully"
	Apr 12 20:16:34 old-k8s-version-20220412200421-42006 containerd[471]: time="2022-04-12T20:16:34.123797718Z" level=info msg="shim disconnected" id=f03411fc533041f9ddcf991f18a51b6055896e203a19557ce49131bc9e7796b4
	Apr 12 20:16:34 old-k8s-version-20220412200421-42006 containerd[471]: time="2022-04-12T20:16:34.123876227Z" level=warning msg="cleaning up after shim disconnected" id=f03411fc533041f9ddcf991f18a51b6055896e203a19557ce49131bc9e7796b4 namespace=k8s.io
	Apr 12 20:16:34 old-k8s-version-20220412200421-42006 containerd[471]: time="2022-04-12T20:16:34.123887952Z" level=info msg="cleaning up dead shim"
	Apr 12 20:16:34 old-k8s-version-20220412200421-42006 containerd[471]: time="2022-04-12T20:16:34.135511059Z" level=warning msg="cleanup warnings time=\"2022-04-12T20:16:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4017\n"
	Apr 12 20:16:34 old-k8s-version-20220412200421-42006 containerd[471]: time="2022-04-12T20:16:34.688914136Z" level=info msg="RemoveContainer for \"f15a1baf56cab3bcc159300dee79248a0ee811277a6810065b58050c96a7f78b\""
	Apr 12 20:16:34 old-k8s-version-20220412200421-42006 containerd[471]: time="2022-04-12T20:16:34.695965899Z" level=info msg="RemoveContainer for \"f15a1baf56cab3bcc159300dee79248a0ee811277a6810065b58050c96a7f78b\" returns successfully"
	Apr 12 20:17:17 old-k8s-version-20220412200421-42006 containerd[471]: time="2022-04-12T20:17:17.675948354Z" level=info msg="CreateContainer within sandbox \"72ec8def5691dad6428509dd888e491782c456828105fcda0a80993268baecd8\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:4,}"
	Apr 12 20:17:17 old-k8s-version-20220412200421-42006 containerd[471]: time="2022-04-12T20:17:17.691522991Z" level=info msg="CreateContainer within sandbox \"72ec8def5691dad6428509dd888e491782c456828105fcda0a80993268baecd8\" for &ContainerMetadata{Name:kindnet-cni,Attempt:4,} returns container id \"1bd2c2fccd8c547472f81fd84ffcc85248838b6f6bded8d4ba9f1c12dfb234c1\""
	Apr 12 20:17:17 old-k8s-version-20220412200421-42006 containerd[471]: time="2022-04-12T20:17:17.692186237Z" level=info msg="StartContainer for \"1bd2c2fccd8c547472f81fd84ffcc85248838b6f6bded8d4ba9f1c12dfb234c1\""
	Apr 12 20:17:17 old-k8s-version-20220412200421-42006 containerd[471]: time="2022-04-12T20:17:17.884440017Z" level=info msg="StartContainer for \"1bd2c2fccd8c547472f81fd84ffcc85248838b6f6bded8d4ba9f1c12dfb234c1\" returns successfully"
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-20220412200421-42006
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-20220412200421-42006
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=dcd548d63d1c0dcbdc0ffc0bd37d4379117c142f
	                    minikube.k8s.io/name=old-k8s-version-20220412200421-42006
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_04_12T20_04_59_0700
	                    minikube.k8s.io/version=v1.25.2
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 12 Apr 2022 20:04:53 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 12 Apr 2022 20:16:54 +0000   Tue, 12 Apr 2022 20:04:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 12 Apr 2022 20:16:54 +0000   Tue, 12 Apr 2022 20:04:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 12 Apr 2022 20:16:54 +0000   Tue, 12 Apr 2022 20:04:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Tue, 12 Apr 2022 20:16:54 +0000   Tue, 12 Apr 2022 20:04:50 +0000   KubeletNotReady              runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    old-k8s-version-20220412200421-42006
	Capacity:
	 cpu:                8
	 ephemeral-storage:  304695084Ki
	 hugepages-1Gi:      0
	 hugepages-2Mi:      0
	 memory:             32873828Ki
	 pods:               110
	Allocatable:
	 cpu:                8
	 ephemeral-storage:  304695084Ki
	 hugepages-1Gi:      0
	 hugepages-2Mi:      0
	 memory:             32873828Ki
	 pods:               110
	System Info:
	 Machine ID:                 140a143b31184b58be947b52a01fff83
	 System UUID:                0b57e9d3-0bbc-4976-a928-dc02ca892e39
	 Boot ID:                    16b2caa1-c1b9-4ccc-85b8-d4dc3f51a5e1
	 Kernel Version:             5.13.0-1023-gcp
	 OS Image:                   Ubuntu 20.04.4 LTS
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  containerd://1.5.10
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (6 in total)
	  Namespace                  Name                                                            CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                                            ------------  ----------  ---------------  -------------  ---
	  kube-system                etcd-old-k8s-version-20220412200421-42006                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                kindnet-xxqjk                                                   100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                kube-apiserver-old-k8s-version-20220412200421-42006             250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                kube-controller-manager-old-k8s-version-20220412200421-42006    200m (2%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                kube-proxy-nt4pk                                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                kube-scheduler-old-k8s-version-20220412200421-42006             100m (1%)     0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                650m (8%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From                                              Message
	  ----    ------                   ----               ----                                              -------
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)  kubelet, old-k8s-version-20220412200421-42006     Node old-k8s-version-20220412200421-42006 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet, old-k8s-version-20220412200421-42006     Node old-k8s-version-20220412200421-42006 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)  kubelet, old-k8s-version-20220412200421-42006     Node old-k8s-version-20220412200421-42006 status is now: NodeHasSufficientPID
	  Normal  Starting                 12m                kube-proxy, old-k8s-version-20220412200421-42006  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [  +0.000009] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +0.125166] IPv4: martian source 10.244.0.7 from 10.244.0.7, on dev vethe3e22a2f
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff d2 83 e6 b4 2e c9 08 06
	[  +0.519855] IPv4: martian source 10.244.0.8 from 10.244.0.8, on dev vethde433a44
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fe f7 53 8a eb 26 08 06
	[  +0.208112] IPv4: martian source 10.244.0.9 from 10.244.0.9, on dev veth05fda112
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 06 c9 f0 64 c1 d9 08 06
	[Apr12 20:12] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +1.026706] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +1.023926] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +2.947865] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +1.023840] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +1.019933] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +2.959880] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +1.007861] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +1.023916] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	
	* 
	* ==> etcd [f29f2d4e263bc07cd05cd9c61510d49796a96af91aaf3c20135c8e50227408a5] <==
	* 2022-04-12 20:04:49.582091 I | embed: listening for metrics on http://127.0.0.1:2381
	2022-04-12 20:04:49.806733 I | raft: 8688e899f7831fc7 is starting a new election at term 1
	2022-04-12 20:04:49.806783 I | raft: 8688e899f7831fc7 became candidate at term 2
	2022-04-12 20:04:49.806798 I | raft: 8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 2
	2022-04-12 20:04:49.806811 I | raft: 8688e899f7831fc7 became leader at term 2
	2022-04-12 20:04:49.806819 I | raft: raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 2
	2022-04-12 20:04:49.807090 I | etcdserver: published {Name:old-k8s-version-20220412200421-42006 ClientURLs:[https://192.168.67.2:2379]} to cluster 9d8fdeb88b6def78
	2022-04-12 20:04:49.807114 I | embed: ready to serve client requests
	2022-04-12 20:04:49.807165 I | etcdserver: setting up the initial cluster version to 3.3
	2022-04-12 20:04:49.807314 I | embed: ready to serve client requests
	2022-04-12 20:04:49.807714 N | etcdserver/membership: set the initial cluster version to 3.3
	2022-04-12 20:04:49.807811 I | etcdserver/api: enabled capabilities for version 3.3
	2022-04-12 20:04:49.808554 I | embed: serving client requests on 192.168.67.2:2379
	2022-04-12 20:04:49.808691 I | embed: serving client requests on 127.0.0.1:2379
	2022-04-12 20:04:54.979482 W | etcdserver: request "header:<ID:2289939807800189654 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/priorityclasses/system-node-critical\" mod_revision:0 > success:<request_put:<key:\"/registry/priorityclasses/system-node-critical\" value_size:221 >> failure:<>>" with result "size:14" took too long (127.000495ms) to execute
	2022-04-12 20:04:54.980336 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "range_response_count:0 size:4" took too long (131.368725ms) to execute
	2022-04-12 20:04:54.981355 W | etcdserver: read-only range request "key:\"/registry/clusterroles/system:aggregate-to-view\" " with result "range_response_count:0 size:4" took too long (185.420261ms) to execute
	2022-04-12 20:05:08.444522 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/cronjob-controller\" " with result "range_response_count:1 size:203" took too long (237.985152ms) to execute
	2022-04-12 20:05:08.611060 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/replication-controller\" " with result "range_response_count:0 size:5" took too long (156.655583ms) to execute
	2022-04-12 20:05:08.611112 W | etcdserver: read-only range request "key:\"/registry/jobs/\" range_end:\"/registry/jobs0\" limit:500 " with result "range_response_count:0 size:5" took too long (156.642288ms) to execute
	2022-04-12 20:05:11.193931 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/deployment-controller\" " with result "range_response_count:0 size:5" took too long (179.101374ms) to execute
	2022-04-12 20:05:11.556922 W | etcdserver: request "header:<ID:2289939807800190189 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/deployment-controller\" mod_revision:266 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/deployment-controller\" value_size:178 >> failure:<request_range:<key:\"/registry/serviceaccounts/kube-system/deployment-controller\" > >>" with result "size:16" took too long (184.09372ms) to execute
	2022-04-12 20:05:11.557051 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/certificate-controller\" " with result "range_response_count:0 size:5" took too long (259.936755ms) to execute
	2022-04-12 20:14:50.523759 I | mvcc: store.index: compact 453
	2022-04-12 20:14:50.524585 I | mvcc: finished scheduled compaction at 453 (took 486.812µs)
	
	* 
	* ==> kernel <==
	*  20:17:19 up  2:59,  0 users,  load average: 0.19, 0.74, 1.36
	Linux old-k8s-version-20220412200421-42006 5.13.0-1023-gcp #28~20.04.1-Ubuntu SMP Wed Mar 30 03:51:07 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [e47ba7bc7187c135dde6e6c116fd570d9338c6fa80edee55405758c75532e6db] <==
	* I0412 20:04:53.817855       1 naming_controller.go:288] Starting NamingConditionController
	I0412 20:04:53.817876       1 establishing_controller.go:73] Starting EstablishingController
	I0412 20:04:53.817895       1 nonstructuralschema_controller.go:191] Starting NonStructuralSchemaConditionController
	I0412 20:04:53.821058       1 apiapproval_controller.go:185] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0412 20:04:53.886711       1 cache.go:39] Caches are synced for autoregister controller
	I0412 20:04:53.888960       1 shared_informer.go:204] Caches are synced for crd-autoregister 
	I0412 20:04:53.894066       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0412 20:04:53.912646       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0412 20:04:54.785212       1 controller.go:107] OpenAPI AggregationController: Processing item 
	I0412 20:04:54.785323       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0412 20:04:54.785532       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0412 20:04:54.981976       1 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
	I0412 20:04:54.989210       1 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
	I0412 20:04:54.989520       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0412 20:04:55.602026       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I0412 20:04:56.835537       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0412 20:04:57.115593       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0412 20:04:57.408794       1 lease.go:222] Resetting endpoints for master service "kubernetes" to [192.168.67.2]
	I0412 20:04:57.409902       1 controller.go:606] quota admission added evaluator for: endpoints
	I0412 20:04:58.035069       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I0412 20:04:58.723065       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I0412 20:04:59.062703       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I0412 20:05:14.419802       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	I0412 20:05:14.457130       1 controller.go:606] quota admission added evaluator for: events.events.k8s.io
	I0412 20:05:14.798379       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	
	* 
	* ==> kube-controller-manager [e3d3ef830b73a6caad316df060603879e4acd4e12edca47bc38cbc8b4e8f67a1] <==
	* I0412 20:05:14.416160       1 shared_informer.go:204] Caches are synced for daemon sets 
	I0412 20:05:14.416404       1 shared_informer.go:204] Caches are synced for persistent volume 
	I0412 20:05:14.416449       1 shared_informer.go:204] Caches are synced for GC 
	I0412 20:05:14.416458       1 shared_informer.go:204] Caches are synced for stateful set 
	I0412 20:05:14.420747       1 shared_informer.go:204] Caches are synced for namespace 
	I0412 20:05:14.446207       1 log.go:172] [INFO] signed certificate with serial number 553674720293122649670790457411009586856850398380
	I0412 20:05:14.452389       1 event.go:255] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"d91a3f48-91ea-4047-96eb-febc4fd5896f", APIVersion:"apps/v1", ResourceVersion:"198", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-nt4pk
	I0412 20:05:14.453892       1 event.go:255] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"58fcbd78-08ad-4c23-81c3-6b4bc4796f4f", APIVersion:"apps/v1", ResourceVersion:"208", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kindnet-xxqjk
	E0412 20:05:14.485627       1 daemon_controller.go:302] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"d91a3f48-91ea-4047-96eb-febc4fd5896f", ResourceVersion:"198", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63785390699, loc:(*time.Location)(0x7776000)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc0014eb6e0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Names
pace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeS
ource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc001683ec0), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0014eb700), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolu
meSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIV
olumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0014eb720), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.A
zureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.16.0", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc0014eb760)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMo
de)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc0017e04b0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0016e8778), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"beta.kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServic
eAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00168ede0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy
{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc00099e7e8)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc0016e87b8)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	I0412 20:05:14.499652       1 shared_informer.go:204] Caches are synced for cidrallocator 
	I0412 20:05:14.512453       1 range_allocator.go:359] Set node old-k8s-version-20220412200421-42006 PodCIDR to [10.244.0.0/24]
	I0412 20:05:14.581829       1 shared_informer.go:204] Caches are synced for HPA 
	I0412 20:05:14.766250       1 shared_informer.go:204] Caches are synced for ReplicaSet 
	I0412 20:05:14.796326       1 shared_informer.go:204] Caches are synced for deployment 
	I0412 20:05:14.802095       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"c3850259-9414-497e-b19b-05b488cd9753", APIVersion:"apps/v1", ResourceVersion:"336", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-5644d7b6d9 to 1
	I0412 20:05:14.808727       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-5644d7b6d9", UID:"1497655a-7413-453d-bf35-8edfda600b44", APIVersion:"apps/v1", ResourceVersion:"337", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-5644d7b6d9-z6lnj
	I0412 20:05:14.815644       1 shared_informer.go:204] Caches are synced for disruption 
	I0412 20:05:14.815672       1 disruption.go:341] Sending events to api server.
	I0412 20:05:14.882180       1 shared_informer.go:204] Caches are synced for resource quota 
	I0412 20:05:14.920223       1 shared_informer.go:204] Caches are synced for garbage collector 
	I0412 20:05:14.920251       1 garbagecollector.go:139] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0412 20:05:14.920729       1 shared_informer.go:204] Caches are synced for resource quota 
	I0412 20:05:14.978270       1 shared_informer.go:204] Caches are synced for job 
	I0412 20:05:15.817797       1 shared_informer.go:197] Waiting for caches to sync for garbage collector
	I0412 20:05:15.924972       1 shared_informer.go:204] Caches are synced for garbage collector 
	
	* 
	* ==> kube-proxy [d1642a69585f2b5d8f43901e8a491cead56c56ef33038261d4145d7959922b9b] <==
	* W0412 20:05:15.109854       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I0412 20:05:15.118694       1 node.go:135] Successfully retrieved node IP: 192.168.67.2
	I0412 20:05:15.118739       1 server_others.go:149] Using iptables Proxier.
	I0412 20:05:15.119285       1 server.go:529] Version: v1.16.0
	I0412 20:05:15.119941       1 config.go:131] Starting endpoints config controller
	I0412 20:05:15.119963       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I0412 20:05:15.119997       1 config.go:313] Starting service config controller
	I0412 20:05:15.120007       1 shared_informer.go:197] Waiting for caches to sync for service config
	I0412 20:05:15.220204       1 shared_informer.go:204] Caches are synced for endpoints config 
	I0412 20:05:15.220290       1 shared_informer.go:204] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [6cc69a6c92a9c7e418d30d94f1777cbd24a28b39c530a70bc05aa2bb9749c133] <==
	* I0412 20:04:53.828463       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	I0412 20:04:53.829174       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	E0412 20:04:53.893487       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0412 20:04:53.893757       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0412 20:04:53.893903       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0412 20:04:53.895116       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0412 20:04:53.895227       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0412 20:04:53.895262       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0412 20:04:53.896417       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0412 20:04:53.896583       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0412 20:04:53.898962       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0412 20:04:53.899567       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0412 20:04:53.899864       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0412 20:04:54.895250       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0412 20:04:54.898563       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0412 20:04:54.899824       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0412 20:04:54.900936       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0412 20:04:54.909762       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0412 20:04:54.911797       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0412 20:04:54.914318       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0412 20:04:54.915374       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0412 20:04:54.916368       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0412 20:04:54.923327       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0412 20:04:54.982883       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0412 20:05:14.813397       1 factory.go:585] pod is already present in the activeQ
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2022-04-12 20:04:30 UTC, end at Tue 2022-04-12 20:17:19 UTC. --
	Apr 12 20:15:33 old-k8s-version-20220412200421-42006 kubelet[896]: E0412 20:15:33.875100     896 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Apr 12 20:15:38 old-k8s-version-20220412200421-42006 kubelet[896]: E0412 20:15:38.876035     896 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Apr 12 20:15:43 old-k8s-version-20220412200421-42006 kubelet[896]: E0412 20:15:43.876805     896 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Apr 12 20:15:48 old-k8s-version-20220412200421-42006 kubelet[896]: E0412 20:15:48.877490     896 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Apr 12 20:15:53 old-k8s-version-20220412200421-42006 kubelet[896]: E0412 20:15:53.878412     896 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Apr 12 20:15:58 old-k8s-version-20220412200421-42006 kubelet[896]: E0412 20:15:58.879255     896 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Apr 12 20:16:03 old-k8s-version-20220412200421-42006 kubelet[896]: E0412 20:16:03.880112     896 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Apr 12 20:16:08 old-k8s-version-20220412200421-42006 kubelet[896]: E0412 20:16:08.880914     896 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Apr 12 20:16:13 old-k8s-version-20220412200421-42006 kubelet[896]: E0412 20:16:13.881695     896 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Apr 12 20:16:18 old-k8s-version-20220412200421-42006 kubelet[896]: E0412 20:16:18.882522     896 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Apr 12 20:16:23 old-k8s-version-20220412200421-42006 kubelet[896]: E0412 20:16:23.883236     896 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Apr 12 20:16:28 old-k8s-version-20220412200421-42006 kubelet[896]: E0412 20:16:28.883959     896 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Apr 12 20:16:33 old-k8s-version-20220412200421-42006 kubelet[896]: E0412 20:16:33.884767     896 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Apr 12 20:16:34 old-k8s-version-20220412200421-42006 kubelet[896]: E0412 20:16:34.688870     896 pod_workers.go:191] Error syncing pod 306e6dc0-594c-4013-acc5-0fcbdf38806f ("kindnet-xxqjk_kube-system(306e6dc0-594c-4013-acc5-0fcbdf38806f)"), skipping: failed to "StartContainer" for "kindnet-cni" with CrashLoopBackOff: "back-off 40s restarting failed container=kindnet-cni pod=kindnet-xxqjk_kube-system(306e6dc0-594c-4013-acc5-0fcbdf38806f)"
	Apr 12 20:16:38 old-k8s-version-20220412200421-42006 kubelet[896]: E0412 20:16:38.885578     896 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Apr 12 20:16:43 old-k8s-version-20220412200421-42006 kubelet[896]: E0412 20:16:43.886347     896 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Apr 12 20:16:48 old-k8s-version-20220412200421-42006 kubelet[896]: E0412 20:16:48.887056     896 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Apr 12 20:16:49 old-k8s-version-20220412200421-42006 kubelet[896]: E0412 20:16:49.673634     896 pod_workers.go:191] Error syncing pod 306e6dc0-594c-4013-acc5-0fcbdf38806f ("kindnet-xxqjk_kube-system(306e6dc0-594c-4013-acc5-0fcbdf38806f)"), skipping: failed to "StartContainer" for "kindnet-cni" with CrashLoopBackOff: "back-off 40s restarting failed container=kindnet-cni pod=kindnet-xxqjk_kube-system(306e6dc0-594c-4013-acc5-0fcbdf38806f)"
	Apr 12 20:16:53 old-k8s-version-20220412200421-42006 kubelet[896]: E0412 20:16:53.887920     896 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Apr 12 20:16:58 old-k8s-version-20220412200421-42006 kubelet[896]: E0412 20:16:58.888739     896 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Apr 12 20:17:03 old-k8s-version-20220412200421-42006 kubelet[896]: E0412 20:17:03.673654     896 pod_workers.go:191] Error syncing pod 306e6dc0-594c-4013-acc5-0fcbdf38806f ("kindnet-xxqjk_kube-system(306e6dc0-594c-4013-acc5-0fcbdf38806f)"), skipping: failed to "StartContainer" for "kindnet-cni" with CrashLoopBackOff: "back-off 40s restarting failed container=kindnet-cni pod=kindnet-xxqjk_kube-system(306e6dc0-594c-4013-acc5-0fcbdf38806f)"
	Apr 12 20:17:03 old-k8s-version-20220412200421-42006 kubelet[896]: E0412 20:17:03.889536     896 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Apr 12 20:17:08 old-k8s-version-20220412200421-42006 kubelet[896]: E0412 20:17:08.890240     896 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Apr 12 20:17:13 old-k8s-version-20220412200421-42006 kubelet[896]: E0412 20:17:13.891084     896 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Apr 12 20:17:18 old-k8s-version-20220412200421-42006 kubelet[896]: E0412 20:17:18.891960     896 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	

                                                
                                                
-- /stdout --
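The logs above point at a single root cause: the kindnet-cni container exits repeatedly (attempts 2, 3 and 4 in the containerd log), so the CNI plugin never initializes, the node keeps the NotReady condition, and the readiness wait loop times out. A minimal sketch of the next triage step, pulling the crashed container's output with the pod name taken from the events above (the flags are standard kubectl; the actual failure output is not captured in this report):

	kubectl --context old-k8s-version-20220412200421-42006 -n kube-system \
	  logs kindnet-xxqjk -c kindnet-cni --previous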
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20220412200421-42006 -n old-k8s-version-20220412200421-42006
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-20220412200421-42006 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: busybox coredns-5644d7b6d9-z6lnj storage-provisioner
helpers_test.go:272: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context old-k8s-version-20220412200421-42006 describe pod busybox coredns-5644d7b6d9-z6lnj storage-provisioner
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context old-k8s-version-20220412200421-42006 describe pod busybox coredns-5644d7b6d9-z6lnj storage-provisioner: exit status 1 (61.136778ms)

                                                
                                                
-- stdout --
	Name:         busybox
	Namespace:    default
	Priority:     0
	Node:         <none>
	Labels:       integration-test=busybox
	Annotations:  <none>
	Status:       Pending
	IP:           
	IPs:          <none>
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from default-token-b5lb8 (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  default-token-b5lb8:
	    Type:        Secret (a volume populated by a Secret)
	    SecretName:  default-token-b5lb8
	    Optional:    false
	QoS Class:       BestEffort
	Node-Selectors:  <none>
	Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                    From               Message
	  ----     ------            ----                   ----               -------
	  Warning  FailedScheduling  8m2s                   default-scheduler  0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.
	  Warning  FailedScheduling  5m26s (x1 over 6m56s)  default-scheduler  0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.

                                                
                                                
-- /stdout --
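The busybox pod is Pending for the same reason: the scheduler rejects the only node while it carries the node.kubernetes.io/not-ready:NoSchedule taint shown in the node description above. One way to confirm the taint directly (standard kubectl; context and node names taken from this report):

	kubectl --context old-k8s-version-20220412200421-42006 \
	  get node old-k8s-version-20220412200421-42006 -o jsonpath='{.spec.taints}'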
** stderr ** 
	Error from server (NotFound): pods "coredns-5644d7b6d9-z6lnj" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context old-k8s-version-20220412200421-42006 describe pod busybox coredns-5644d7b6d9-z6lnj storage-provisioner: exit status 1
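The two NotFound errors above are a quirk of the post-mortem helper rather than a second failure: it describes all non-running pods in a single kubectl call with no namespace, so coredns-5644d7b6d9-z6lnj and storage-provisioner (which live in kube-system) are looked up in the default namespace and miss, forcing exit status 1 even though busybox was described. A hypothetical per-namespace form that would avoid the error (not what the harness actually runs):

	kubectl --context old-k8s-version-20220412200421-42006 describe pod busybox
	kubectl --context old-k8s-version-20220412200421-42006 -n kube-system \
	  describe pod coredns-5644d7b6d9-z6lnj storage-provisioner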
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220412200421-42006
helpers_test.go:235: (dbg) docker inspect old-k8s-version-20220412200421-42006:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a5e4ff2bbf6e0c1f98d862b7c5909f328d958a622c77ca8f2a1aeb8757f4bc42",
	        "Created": "2022-04-12T20:04:30.270409412Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 249540,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-04-12T20:04:30.654643592Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:44d43b69f3d5ba7f801dca891b535f23f9839671e82277938ec7dc42a22c50d6",
	        "ResolvConfPath": "/var/lib/docker/containers/a5e4ff2bbf6e0c1f98d862b7c5909f328d958a622c77ca8f2a1aeb8757f4bc42/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a5e4ff2bbf6e0c1f98d862b7c5909f328d958a622c77ca8f2a1aeb8757f4bc42/hostname",
	        "HostsPath": "/var/lib/docker/containers/a5e4ff2bbf6e0c1f98d862b7c5909f328d958a622c77ca8f2a1aeb8757f4bc42/hosts",
	        "LogPath": "/var/lib/docker/containers/a5e4ff2bbf6e0c1f98d862b7c5909f328d958a622c77ca8f2a1aeb8757f4bc42/a5e4ff2bbf6e0c1f98d862b7c5909f328d958a622c77ca8f2a1aeb8757f4bc42-json.log",
	        "Name": "/old-k8s-version-20220412200421-42006",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "old-k8s-version-20220412200421-42006:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20220412200421-42006",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/7832f59e03daf68e56b6521f25b5ed3223d02619c327fdde0f78d7822640d042-init/diff:/var/lib/docker/overlay2/a46d95d024de4bf9705eb193a92586bdab1878cd991975232b71b00099a9dcbd/diff:/var/lib/docker/overlay2/ea82ee4a684697cc3575193cd81b57372b927c9bf8e744fce634f9abd0ce56f9/diff:/var/lib/docker/overlay2/78746ad8dd0d6497f442bd186c99cfd280a7ed0ff07c9d33d217c0f00c8c4565/diff:/var/lib/docker/overlay2/a402f380eceb56655ea5f1e6ca4a61a01ae014a5df04f1a7d02d8f57ff3e6c84/diff:/var/lib/docker/overlay2/b27a231791a4d14a662f9e6e34fdd213411e56cc17149199657aa480018b3c72/diff:/var/lib/docker/overlay2/0a44e7fc2c8d5589d496b9d0585d39e8e142f48342ff9669a35c370bd0298e42/diff:/var/lib/docker/overlay2/6ca98e52ca7d4cc60d14bd2db9969dd3356e0e0ce3acd5bfb5734e6e59f52c7e/diff:/var/lib/docker/overlay2/9957a7c00c30c9d801326093ddf20994a7ee1daaa54bc4dac5c2dd6d8711bd7e/diff:/var/lib/docker/overlay2/f7a1aafecf6ee716c484b5eecbbf236a53607c253fe283c289707fad85495a88/diff:/var/lib/docker/overlay2/fe8cd1
26522650fedfc827751e0b74da9a882ff48de51bc9dee6428ee3bc1122/diff:/var/lib/docker/overlay2/5b4cc7e4a78288063ad39231ca158608aa28e9dec6015d4e186e4c4d6888017f/diff:/var/lib/docker/overlay2/2a754ceb6abee0f92c99667fae50c7899233e94595630e9caffbf73cda1ff741/diff:/var/lib/docker/overlay2/9e69139d9b2bc63ab678378e004018ece394ec37e8289ba5eb30901dda160da5/diff:/var/lib/docker/overlay2/3db8e6413b3a1f309b81d2e1a79c3d239c4e4568b31a6f4bf92511f477f3a61d/diff:/var/lib/docker/overlay2/5ab54e45d09e2d6da4f4228ebae3075b5974e1d847526c1011fc7368392ef0d2/diff:/var/lib/docker/overlay2/6daf6a3cf916347bbbb70ace4aab29dd0f272dc9e39d6b0bf14940470857f1d5/diff:/var/lib/docker/overlay2/b85d29df9ed74e769c82a956eb46ca4eaf51018e94270fee2f58a6f2d82c354c/diff:/var/lib/docker/overlay2/0804b9c30e0dcc68e15139106e47bca1969b010d520652c87ff1476f5da9b799/diff:/var/lib/docker/overlay2/2ef50ba91c77826aae2efca8daf7194c2d56fd8e745476a35413585cdab580a6/diff:/var/lib/docker/overlay2/6f5a272367c30d47254dedc8a42e6b2791c406c3b74fd6a8242d568e4ec362e3/diff:/var/lib/d
ocker/overlay2/e978bd5ca7463862ca1b51d0bf19f95d916464dc866f09f1ab4a5ae4c082c3a9/diff:/var/lib/docker/overlay2/0d60a5805e276ca3bff4824250eab1d2960e9d10d28282e07652204c07dc107f/diff:/var/lib/docker/overlay2/d00efa0bc999057fcf3efdeed81022cc8b9b9871919f11d7d9199a3d22fda41b/diff:/var/lib/docker/overlay2/44d3db5bf7925c4cc8ee60008ff23d799e12ea6586850d797b930fa796788861/diff:/var/lib/docker/overlay2/4af15c525b7ce96b7fd4117c156f53cf9099702641c2907909c12b7019563d44/diff:/var/lib/docker/overlay2/ae9ca4b8da4afb1303158a42ec2ac83dc057c0eaefcd69b7eeaa094ae24a39e7/diff:/var/lib/docker/overlay2/afb8ebd776ddcba17d1056f2350cd0b303c6664964644896a92e9c07252b5d95/diff:/var/lib/docker/overlay2/41b6235378ad54ccaec907f16811e7cd66bd777db63151293f4d8247a33af8f1/diff:/var/lib/docker/overlay2/e079465076581cb577a9d5c7d676cecb6495ddd73d9fc330e734203dd7e48607/diff:/var/lib/docker/overlay2/2d3a7c3e62a99d54d94c2562e13b904453442bda8208afe73cdbe1afdbdd0684/diff:/var/lib/docker/overlay2/b9e03b9cbc1c5a9bbdbb0c99ca5d7539c2fa81a37872c40e07377b52f19
50f4b/diff:/var/lib/docker/overlay2/fd0b72378869edec809e7ead1e4448ae67c73245e0e98d751c51253c80f12d56/diff:/var/lib/docker/overlay2/a34f5625ad35eb2eb1058204a5c23590d70d9aae62a3a0cf05f87501c388ccde/diff:/var/lib/docker/overlay2/6221ad5f4d7b133c35d96ab112cf2eb437196475a72ea0ec8952c058c6644381/diff:/var/lib/docker/overlay2/b33a322162ab62a47e5e731b35da4a989d8a79fcb67e1925b109eace6772370c/diff:/var/lib/docker/overlay2/b52fc81aca49f276f1c709fa139521063628f4042b9da5969a3487a57ee3226b/diff:/var/lib/docker/overlay2/5b4d11a181cad1ea657c7ea99d422b51c942ece21b8d24442b4e8806644e0e1c/diff:/var/lib/docker/overlay2/1620ce1d42f02f38d07f3ff0970e3df6940a3be20f3c7cd835f4f40f5cc2d010/diff:/var/lib/docker/overlay2/43f18c528700dc241024bb24f43a0d5192ecc9575f4b053582410f6265326434/diff:/var/lib/docker/overlay2/e59874999e485483e50da428a499e40c91890c33515857454d7a64bc04ca0c43/diff:/var/lib/docker/overlay2/a120ff1bbaa325cd87d2682d6751d3bf287b66d4bbe31bd1f9f6283d724491ac/diff:/var/lib/docker/overlay2/a6a6f3646fabc023283ff6349b9627be8332c4
bb740688f8fda12c98bd76b725/diff:/var/lib/docker/overlay2/3c2b110c4b3a8689b2792b2b73f99f06bd9858b494c2164e812208579b0223f2/diff:/var/lib/docker/overlay2/98e3881e2e4128283f8d66fafc082bc795e22eab77f135635d3249367b92ba5c/diff:/var/lib/docker/overlay2/ce937670cf64eff618c699bfd15e46c6d70c0184fef594182e5ec6df83b265bc/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7832f59e03daf68e56b6521f25b5ed3223d02619c327fdde0f78d7822640d042/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7832f59e03daf68e56b6521f25b5ed3223d02619c327fdde0f78d7822640d042/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7832f59e03daf68e56b6521f25b5ed3223d02619c327fdde0f78d7822640d042/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20220412200421-42006",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20220412200421-42006/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20220412200421-42006",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20220412200421-42006",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20220412200421-42006",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3ad84289742f0dfbd44646dfe51c90a2743ffb78bf6626291683c05a3d95eee0",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49392"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49391"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49388"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49390"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49389"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/3ad84289742f",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20220412200421-42006": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "a5e4ff2bbf6e",
	                        "old-k8s-version-20220412200421-42006"
	                    ],
	                    "NetworkID": "0b96a6a249d72d5fff5d5b9db029edbfc6a07a56e8064108c65000591927cbc6",
	                    "EndpointID": "c3007d28c5878ca69ad88197e01438f31f4f4f7d8152c555a927532e6a59c8f3",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
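The docker inspect dump above can be spot-checked without reading the whole document: docker's Go-template support extracts single fields, the same mechanism the harness uses in its cli_runner calls below. A couple of illustrative queries against this container (manual commands, not part of the test run):

    docker inspect -f '{{.State.Status}}' old-k8s-version-20220412200421-42006
    # "running", matching the State block above
    docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' old-k8s-version-20220412200421-42006
    # "49392", the ephemeral host port bound to the container's SSH port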
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20220412200421-42006 -n old-k8s-version-20220412200421-42006
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-20220412200421-42006 logs -n 25
E0412 20:17:21.306939   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/custom-weave-20220412195203-42006/client.crt: no such file or directory
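The cert_rotation error above is incidental to this failure: client-go's certificate reload is pointed at a client.crt for the custom-weave profile, whose files are no longer on disk (most likely cleaned up earlier in the run while a kubeconfig reference survived). A quick check, using the path from the error message:

    stat /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/custom-weave-20220412195203-42006/client.crt \
      || echo "profile certificate is gone, matching the error"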
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/DeployApp logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|--------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                            Args                            |                  Profile                   |  User   | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------------------------|--------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| addons  | enable dashboard -p                                        | no-preload-20220412200453-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:06:37 UTC | Tue, 12 Apr 2022 20:06:38 UTC |
	|         | no-preload-20220412200453-42006                            |                                            |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                            |         |         |                               |                               |
	| start   | -p bridge-20220412195202-42006                             | bridge-20220412195202-42006                | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:02:42 UTC | Tue, 12 Apr 2022 20:07:57 UTC |
	|         | --memory=2048                                              |                                            |         |         |                               |                               |
	|         | --alsologtostderr                                          |                                            |         |         |                               |                               |
	|         | --wait=true --wait-timeout=5m                              |                                            |         |         |                               |                               |
	|         | --cni=bridge --driver=docker                               |                                            |         |         |                               |                               |
	|         | --container-runtime=containerd                             |                                            |         |         |                               |                               |
	| ssh     | -p bridge-20220412195202-42006                             | bridge-20220412195202-42006                | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:07:57 UTC | Tue, 12 Apr 2022 20:07:58 UTC |
	|         | pgrep -a kubelet                                           |                                            |         |         |                               |                               |
	| -p      | old-k8s-version-20220412200421-42006                       | old-k8s-version-20220412200421-42006       | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:09:15 UTC | Tue, 12 Apr 2022 20:09:16 UTC |
	|         | logs -n 25                                                 |                                            |         |         |                               |                               |
	| -p      | embed-certs-20220412200510-42006                           | embed-certs-20220412200510-42006           | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:10:08 UTC | Tue, 12 Apr 2022 20:10:09 UTC |
	|         | logs -n 25                                                 |                                            |         |         |                               |                               |
	| start   | -p                                                         | no-preload-20220412200453-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:06:38 UTC | Tue, 12 Apr 2022 20:12:02 UTC |
	|         | no-preload-20220412200453-42006                            |                                            |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                            |         |         |                               |                               |
	|         | --wait=true --preload=false                                |                                            |         |         |                               |                               |
	|         | --driver=docker                                            |                                            |         |         |                               |                               |
	|         | --container-runtime=containerd                             |                                            |         |         |                               |                               |
	|         | --kubernetes-version=v1.23.6-rc.0                          |                                            |         |         |                               |                               |
	| ssh     | -p                                                         | no-preload-20220412200453-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:12:20 UTC | Tue, 12 Apr 2022 20:12:20 UTC |
	|         | no-preload-20220412200453-42006                            |                                            |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                            |         |         |                               |                               |
	| pause   | -p                                                         | no-preload-20220412200453-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:12:20 UTC | Tue, 12 Apr 2022 20:12:21 UTC |
	|         | no-preload-20220412200453-42006                            |                                            |         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                            |         |         |                               |                               |
	| unpause | -p                                                         | no-preload-20220412200453-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:12:22 UTC | Tue, 12 Apr 2022 20:12:23 UTC |
	|         | no-preload-20220412200453-42006                            |                                            |         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                            |         |         |                               |                               |
	| delete  | -p                                                         | no-preload-20220412200453-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:12:24 UTC | Tue, 12 Apr 2022 20:12:27 UTC |
	|         | no-preload-20220412200453-42006                            |                                            |         |         |                               |                               |
	| delete  | -p                                                         | no-preload-20220412200453-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:12:27 UTC | Tue, 12 Apr 2022 20:12:27 UTC |
	|         | no-preload-20220412200453-42006                            |                                            |         |         |                               |                               |
	| delete  | -p                                                         | disable-driver-mounts-20220412201227-42006 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:12:27 UTC | Tue, 12 Apr 2022 20:12:28 UTC |
	|         | disable-driver-mounts-20220412201227-42006                 |                                            |         |         |                               |                               |
	| -p      | bridge-20220412195202-42006                                | bridge-20220412195202-42006                | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:12:49 UTC | Tue, 12 Apr 2022 20:12:50 UTC |
	|         | logs -n 25                                                 |                                            |         |         |                               |                               |
	| delete  | -p bridge-20220412195202-42006                             | bridge-20220412195202-42006                | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:12:50 UTC | Tue, 12 Apr 2022 20:12:53 UTC |
	| start   | -p newest-cni-20220412201253-42006 --memory=2200           | newest-cni-20220412201253-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:12:53 UTC | Tue, 12 Apr 2022 20:13:47 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                            |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                            |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                            |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                            |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=containerd            |                                            |         |         |                               |                               |
	|         | --kubernetes-version=v1.23.6-rc.0                          |                                            |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | newest-cni-20220412201253-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:13:47 UTC | Tue, 12 Apr 2022 20:13:48 UTC |
	|         | newest-cni-20220412201253-42006                            |                                            |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                            |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                            |         |         |                               |                               |
	| stop    | -p                                                         | newest-cni-20220412201253-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:13:48 UTC | Tue, 12 Apr 2022 20:14:08 UTC |
	|         | newest-cni-20220412201253-42006                            |                                            |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                            |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | newest-cni-20220412201253-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:14:08 UTC | Tue, 12 Apr 2022 20:14:08 UTC |
	|         | newest-cni-20220412201253-42006                            |                                            |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                            |         |         |                               |                               |
	| start   | -p newest-cni-20220412201253-42006 --memory=2200           | newest-cni-20220412201253-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:14:08 UTC | Tue, 12 Apr 2022 20:14:42 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                            |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                            |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                            |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                            |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=containerd            |                                            |         |         |                               |                               |
	|         | --kubernetes-version=v1.23.6-rc.0                          |                                            |         |         |                               |                               |
	| ssh     | -p                                                         | newest-cni-20220412201253-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:14:43 UTC | Tue, 12 Apr 2022 20:14:43 UTC |
	|         | newest-cni-20220412201253-42006                            |                                            |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                            |         |         |                               |                               |
	| pause   | -p                                                         | newest-cni-20220412201253-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:14:43 UTC | Tue, 12 Apr 2022 20:14:44 UTC |
	|         | newest-cni-20220412201253-42006                            |                                            |         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                            |         |         |                               |                               |
	| unpause | -p                                                         | newest-cni-20220412201253-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:14:45 UTC | Tue, 12 Apr 2022 20:14:45 UTC |
	|         | newest-cni-20220412201253-42006                            |                                            |         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                            |         |         |                               |                               |
	| delete  | -p                                                         | newest-cni-20220412201253-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:14:46 UTC | Tue, 12 Apr 2022 20:14:49 UTC |
	|         | newest-cni-20220412201253-42006                            |                                            |         |         |                               |                               |
	| delete  | -p                                                         | newest-cni-20220412201253-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:14:49 UTC | Tue, 12 Apr 2022 20:14:49 UTC |
	|         | newest-cni-20220412201253-42006                            |                                            |         |         |                               |                               |
	| -p      | old-k8s-version-20220412200421-42006                       | old-k8s-version-20220412200421-42006       | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:17:18 UTC | Tue, 12 Apr 2022 20:17:19 UTC |
	|         | logs -n 25                                                 |                                            |         |         |                               |                               |
	|---------|------------------------------------------------------------|--------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/04/12 20:14:08
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.18 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0412 20:14:08.832397  282203 out.go:297] Setting OutFile to fd 1 ...
	I0412 20:14:08.832526  282203 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0412 20:14:08.832537  282203 out.go:310] Setting ErrFile to fd 2...
	I0412 20:14:08.832541  282203 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0412 20:14:08.832644  282203 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/bin
	I0412 20:14:08.832908  282203 out.go:304] Setting JSON to false
	I0412 20:14:08.834493  282203 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":10602,"bootTime":1649783847,"procs":547,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1023-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0412 20:14:08.834611  282203 start.go:125] virtualization: kvm guest
	I0412 20:14:08.837207  282203 out.go:176] * [newest-cni-20220412201253-42006] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	I0412 20:14:08.838808  282203 out.go:176]   - MINIKUBE_LOCATION=13812
	I0412 20:14:08.837440  282203 notify.go:193] Checking for updates...
	I0412 20:14:08.840190  282203 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0412 20:14:08.841789  282203 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0412 20:14:08.843251  282203 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube
	I0412 20:14:08.844774  282203 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0412 20:14:08.845319  282203 config.go:178] Loaded profile config "newest-cni-20220412201253-42006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6-rc.0
	I0412 20:14:08.845793  282203 driver.go:346] Setting default libvirt URI to qemu:///system
	I0412 20:14:08.892101  282203 docker.go:137] docker version: linux-20.10.14
	I0412 20:14:08.892248  282203 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0412 20:14:08.993547  282203 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:75 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:49 SystemTime:2022-04-12 20:14:08.923798845 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1023-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662799872 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0412 20:14:08.993679  282203 docker.go:254] overlay module found
	I0412 20:14:08.996175  282203 out.go:176] * Using the docker driver based on existing profile
	I0412 20:14:08.996210  282203 start.go:284] selected driver: docker
	I0412 20:14:08.996217  282203 start.go:801] validating driver "docker" against &{Name:newest-cni-20220412201253-42006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6-rc.0 ClusterName:newest-cni-20220412201253-42006 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.23.6-rc.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0412 20:14:08.996338  282203 start.go:812] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	W0412 20:14:08.996376  282203 oci.go:120] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0412 20:14:08.996397  282203 out.go:241] ! Your cgroup does not allow setting memory.
	I0412 20:14:08.998211  282203 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0412 20:14:08.998861  282203 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0412 20:14:09.094596  282203 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:75 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:49 SystemTime:2022-04-12 20:14:09.030624528 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1023-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662799872 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	W0412 20:14:09.094806  282203 oci.go:120] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0412 20:14:09.094836  282203 out.go:241] ! Your cgroup does not allow setting memory.
	I0412 20:14:09.096887  282203 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
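Both cgroup warnings come from the same probe (oci.go:120): the host's memory cgroup controller is not usable by Docker, so minikube cannot enforce the requested --memory limit. Two hedged host-side checks, assuming a cgroup v1 host as implied by the cgroupfs driver reported above:

    grep memory /proc/cgroups                 # "enabled" column is 1 when the controller is on
    docker info --format '{{.MemoryLimit}}'   # false when Docker cannot set container memory limits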
	I0412 20:14:09.097012  282203 start_flags.go:866] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0412 20:14:09.097039  282203 cni.go:93] Creating CNI manager for ""
	I0412 20:14:09.097046  282203 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0412 20:14:09.097054  282203 cni.go:217] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0412 20:14:09.097062  282203 cni.go:222] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0412 20:14:09.097069  282203 start_flags.go:306] config:
	{Name:newest-cni-20220412201253-42006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6-rc.0 ClusterName:newest-cni-20220412201253-42006 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.23.6-rc.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0412 20:14:09.099506  282203 out.go:176] * Starting control plane node newest-cni-20220412201253-42006 in cluster newest-cni-20220412201253-42006
	I0412 20:14:09.099556  282203 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0412 20:14:09.101249  282203 out.go:176] * Pulling base image ...
	I0412 20:14:09.101287  282203 preload.go:132] Checking if preload exists for k8s version v1.23.6-rc.0 and runtime containerd
	I0412 20:14:09.101322  282203 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-rc.0-containerd-overlay2-amd64.tar.lz4
	I0412 20:14:09.101342  282203 cache.go:57] Caching tarball of preloaded images
	I0412 20:14:09.101401  282203 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon
	I0412 20:14:09.101566  282203 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-rc.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0412 20:14:09.101582  282203 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6-rc.0 on containerd
	I0412 20:14:09.101721  282203 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/newest-cni-20220412201253-42006/config.json ...
	I0412 20:14:09.147707  282203 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon, skipping pull
	I0412 20:14:09.147734  282203 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 exists in daemon, skipping load
	I0412 20:14:09.147748  282203 cache.go:206] Successfully downloaded all kic artifacts
	I0412 20:14:09.147784  282203 start.go:352] acquiring machines lock for newest-cni-20220412201253-42006: {Name:mk0dccf8a2654d003d8787479cf4abb87e60a916 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0412 20:14:09.147896  282203 start.go:356] acquired machines lock for "newest-cni-20220412201253-42006" in 84.854µs
	I0412 20:14:09.147923  282203 start.go:94] Skipping create...Using existing machine configuration
	I0412 20:14:09.147932  282203 fix.go:55] fixHost starting: 
	I0412 20:14:09.148209  282203 cli_runner.go:164] Run: docker container inspect newest-cni-20220412201253-42006 --format={{.State.Status}}
	I0412 20:14:09.182695  282203 fix.go:103] recreateIfNeeded on newest-cni-20220412201253-42006: state=Stopped err=<nil>
	W0412 20:14:09.182743  282203 fix.go:129] unexpected machine state, will restart: <nil>
	I0412 20:14:09.128201  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:14:11.627831  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:14:09.185311  282203 out.go:176] * Restarting existing docker container for "newest-cni-20220412201253-42006" ...
	I0412 20:14:09.185403  282203 cli_runner.go:164] Run: docker start newest-cni-20220412201253-42006
	I0412 20:14:09.582922  282203 cli_runner.go:164] Run: docker container inspect newest-cni-20220412201253-42006 --format={{.State.Status}}
	I0412 20:14:09.620698  282203 kic.go:416] container "newest-cni-20220412201253-42006" state is running.
	I0412 20:14:09.621213  282203 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220412201253-42006
	I0412 20:14:09.657122  282203 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/newest-cni-20220412201253-42006/config.json ...
	I0412 20:14:09.657367  282203 machine.go:88] provisioning docker machine ...
	I0412 20:14:09.657398  282203 ubuntu.go:169] provisioning hostname "newest-cni-20220412201253-42006"
	I0412 20:14:09.657457  282203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220412201253-42006
	I0412 20:14:09.694424  282203 main.go:134] libmachine: Using SSH client type: native
	I0412 20:14:09.694593  282203 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7d71c0] 0x7da220 <nil>  [] 0s} 127.0.0.1 49422 <nil> <nil>}
	I0412 20:14:09.694609  282203 main.go:134] libmachine: About to run SSH command:
	sudo hostname newest-cni-20220412201253-42006 && echo "newest-cni-20220412201253-42006" | sudo tee /etc/hostname
	I0412 20:14:09.695270  282203 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55074->127.0.0.1:49422: read: connection reset by peer
	I0412 20:14:12.826188  282203 main.go:134] libmachine: SSH cmd err, output: <nil>: newest-cni-20220412201253-42006
	
	I0412 20:14:12.826283  282203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220412201253-42006
	I0412 20:14:12.860717  282203 main.go:134] libmachine: Using SSH client type: native
	I0412 20:14:12.860887  282203 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7d71c0] 0x7da220 <nil>  [] 0s} 127.0.0.1 49422 <nil> <nil>}
	I0412 20:14:12.860908  282203 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-20220412201253-42006' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-20220412201253-42006/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-20220412201253-42006' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0412 20:14:12.984427  282203 main.go:134] libmachine: SSH cmd err, output: <nil>: 
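The SSH command above is minikube's idempotent /etc/hosts patch: if no line ends with the new hostname, it rewrites an existing 127.0.1.1 entry in place, otherwise it appends one, so repeated provisioning runs do not stack duplicate entries. A minimal standalone restatement of the same logic (NAME is a placeholder for any machine name):

    NAME=newest-cni-20220412201253-42006
    if ! grep -q "[[:space:]]$NAME\$" /etc/hosts; then
      if grep -q '^127\.0\.1\.1[[:space:]]' /etc/hosts; then
        sudo sed -i "s/^127.0.1.1[[:space:]].*/127.0.1.1 $NAME/" /etc/hosts   # reuse the existing entry
      else
        echo "127.0.1.1 $NAME" | sudo tee -a /etc/hosts                       # append a new entry
      fi
    fi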
	I0412 20:14:12.984458  282203 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube}
	I0412 20:14:12.984485  282203 ubuntu.go:177] setting up certificates
	I0412 20:14:12.984495  282203 provision.go:83] configureAuth start
	I0412 20:14:12.984546  282203 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220412201253-42006
	I0412 20:14:13.022286  282203 provision.go:138] copyHostCerts
	I0412 20:14:13.022359  282203 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem, removing ...
	I0412 20:14:13.022434  282203 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem
	I0412 20:14:13.022507  282203 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem (1082 bytes)
	I0412 20:14:13.022629  282203 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem, removing ...
	I0412 20:14:13.022645  282203 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem
	I0412 20:14:13.022670  282203 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem (1123 bytes)
	I0412 20:14:13.022733  282203 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem, removing ...
	I0412 20:14:13.022741  282203 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem
	I0412 20:14:13.022761  282203 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem (1675 bytes)
	I0412 20:14:13.022827  282203 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem org=jenkins.newest-cni-20220412201253-42006 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-20220412201253-42006]
	I0412 20:14:13.147393  282203 provision.go:172] copyRemoteCerts
	I0412 20:14:13.147461  282203 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0412 20:14:13.147499  282203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220412201253-42006
	I0412 20:14:13.182738  282203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49422 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/newest-cni-20220412201253-42006/id_rsa Username:docker}
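The ssh client above is assembled from two facts already in this log: the host port Docker mapped to the container's 22/tcp, and the per-machine key under MINIKUBE_HOME. A hedged manual equivalent (same inspect template as the cli_runner calls; MINIKUBE_HOME as printed at the top of this log):

    PORT=$(docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' newest-cni-20220412201253-42006)
    ssh -i "$MINIKUBE_HOME/machines/newest-cni-20220412201253-42006/id_rsa" -p "$PORT" docker@127.0.0.1 'cat /etc/os-release'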
	I0412 20:14:13.271719  282203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0412 20:14:13.291955  282203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0412 20:14:13.311640  282203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0412 20:14:13.330587  282203 provision.go:86] duration metric: configureAuth took 346.079902ms
	I0412 20:14:13.330615  282203 ubuntu.go:193] setting minikube options for container-runtime
	I0412 20:14:13.330805  282203 config.go:178] Loaded profile config "newest-cni-20220412201253-42006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6-rc.0
	I0412 20:14:13.330817  282203 machine.go:91] provisioned docker machine in 3.673434359s
	I0412 20:14:13.330823  282203 start.go:306] post-start starting for "newest-cni-20220412201253-42006" (driver="docker")
	I0412 20:14:13.330829  282203 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0412 20:14:13.330883  282203 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0412 20:14:13.330918  282203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220412201253-42006
	I0412 20:14:13.365737  282203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49422 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/newest-cni-20220412201253-42006/id_rsa Username:docker}
	I0412 20:14:13.460195  282203 ssh_runner.go:195] Run: cat /etc/os-release
	I0412 20:14:13.463475  282203 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0412 20:14:13.463524  282203 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0412 20:14:13.463538  282203 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0412 20:14:13.463544  282203 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0412 20:14:13.463556  282203 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/addons for local assets ...
	I0412 20:14:13.463617  282203 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files for local assets ...
	I0412 20:14:13.463682  282203 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/420062.pem -> 420062.pem in /etc/ssl/certs
	I0412 20:14:13.463765  282203 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0412 20:14:13.471624  282203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/420062.pem --> /etc/ssl/certs/420062.pem (1708 bytes)
	I0412 20:14:13.491654  282203 start.go:309] post-start completed in 160.815375ms
	I0412 20:14:13.491734  282203 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0412 20:14:13.491791  282203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220412201253-42006
	I0412 20:14:13.529484  282203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49422 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/newest-cni-20220412201253-42006/id_rsa Username:docker}
	I0412 20:14:13.616940  282203 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0412 20:14:13.621059  282203 fix.go:57] fixHost completed within 4.473117291s
	I0412 20:14:13.621091  282203 start.go:81] releasing machines lock for "newest-cni-20220412201253-42006", held for 4.473181182s
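Every ssh_runner.go line above is a command executed over the SSH endpoint Docker published for the container (127.0.0.1:49422, user docker, the profile's id_rsa). A sketch of the same round trip with golang.org/x/crypto/ssh, running the df probe logged above; the key path and address are taken from the sshutil.go lines, and host-key checking is skipped because this is a throwaway test rig:

    package main

    import (
    	"fmt"
    	"log"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	keyPath := os.Args[1] // id_rsa path from the sshutil.go lines above
    	addr := os.Args[2]    // e.g. 127.0.0.1:49422

    	key, err := os.ReadFile(keyPath)
    	if err != nil {
    		log.Fatal(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		log.Fatal(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test rig only
    	}
    	client, err := ssh.Dial("tcp", addr, cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()
    	sess, err := client.NewSession()
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer sess.Close()
    	out, err := sess.Output(`df -BG /var | awk 'NR==2{print $4}'`)
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Printf("free space on /var: %s", out)
    }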
	I0412 20:14:13.621178  282203 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220412201253-42006
	I0412 20:14:13.655978  282203 ssh_runner.go:195] Run: systemctl --version
	I0412 20:14:13.656014  282203 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0412 20:14:13.656038  282203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220412201253-42006
	I0412 20:14:13.656108  282203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220412201253-42006
	I0412 20:14:13.692203  282203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49422 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/newest-cni-20220412201253-42006/id_rsa Username:docker}
	I0412 20:14:13.693258  282203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49422 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/newest-cni-20220412201253-42006/id_rsa Username:docker}
	I0412 20:14:13.795984  282203 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0412 20:14:13.808689  282203 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0412 20:14:13.820011  282203 docker.go:183] disabling docker service ...
	I0412 20:14:13.820092  282203 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0412 20:14:13.830551  282203 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0412 20:14:14.127986  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:14:16.627569  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:14:13.840509  282203 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0412 20:14:13.920197  282203 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0412 20:14:13.996299  282203 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0412 20:14:14.006773  282203 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0412 20:14:14.020629  282203 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLm1vbml0b3IudjEuY2dyb3VwcyJdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNiIKICAgIHN0YXRzX2NvbGxlY3RfcGVyaW9kID0gMTAKICAgIGVuYWJsZV90bHNfc3RyZWFtaW5nID0gZ
mFsc2UKICAgIG1heF9jb250YWluZXJfbG9nX2xpbmVfc2l6ZSA9IDE2Mzg0CiAgICByZXN0cmljdF9vb21fc2NvcmVfYWRqID0gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZF0KICAgICAgZGlzY2FyZF91bnBhY2tlZF9sYXllcnMgPSB0cnVlCiAgICAgIHNuYXBzaG90dGVyID0gIm92ZXJsYXlmcyIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQuZGVmYXVsdF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnVudHJ1c3RlZF93b3JrbG9hZF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICIiCiAgICAgICAgcnVudGltZV9lbmdpbmUgPSAiIgogICAgICAgIHJ1bnRpbWVfcm9vdCA9ICIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzLnJ1bmNdCiAgICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICBTeXN0ZW1kQ
2dyb3VwID0gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY25pXQogICAgICBiaW5fZGlyID0gIi9vcHQvY25pL2JpbiIKICAgICAgY29uZl9kaXIgPSAiL2V0Yy9jbmkvbmV0Lm1rIgogICAgICBjb25mX3RlbXBsYXRlID0gIiIKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5yZWdpc3RyeV0KICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5Lm1pcnJvcnNdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5Lm1pcnJvcnMuImRvY2tlci5pbyJdCiAgICAgICAgICBlbmRwb2ludCA9IFsiaHR0cHM6Ly9yZWdpc3RyeS0xLmRvY2tlci5pbyJdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuc2VydmljZS52MS5kaWZmLXNlcnZpY2UiXQogICAgZGVmYXVsdCA9IFsid2Fsa2luZyJdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ2MudjEuc2NoZWR1bGVyIl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
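The containerd config is shipped as a base64 blob so it survives the nested shell quoting, then decoded on the node with base64 -d into /etc/containerd/config.toml. Decoding the blob above yields the TOML, including conf_dir = "/etc/cni/net.mk", sandbox_image = "k8s.gcr.io/pause:3.6", and SystemdCgroup = false. A one-liner equivalent of the decode step, reading the blob on stdin:

    package main

    import (
    	"encoding/base64"
    	"io"
    	"log"
    	"os"
    )

    func main() {
    	// Equivalent of the `base64 -d` in the logged command:
    	// pipe the blob in on stdin, the TOML comes out on stdout.
    	if _, err := io.Copy(os.Stdout, base64.NewDecoder(base64.StdEncoding, os.Stdin)); err != nil {
    		log.Fatal(err)
    	}
    }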
	I0412 20:14:14.035412  282203 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0412 20:14:14.042432  282203 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0412 20:14:14.049388  282203 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0412 20:14:14.128037  282203 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0412 20:14:14.201778  282203 start.go:441] Will wait 60s for socket path /run/containerd/containerd.sock
	I0412 20:14:14.201900  282203 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0412 20:14:14.206190  282203 start.go:462] Will wait 60s for crictl version
	I0412 20:14:14.206249  282203 ssh_runner.go:195] Run: sudo crictl version
	I0412 20:14:14.233021  282203 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-04-12T20:14:14Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0412 20:14:19.127780  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:14:21.627899  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:14:25.280259  282203 ssh_runner.go:195] Run: sudo crictl version
	I0412 20:14:25.305913  282203 start.go:471] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.5.10
	RuntimeApiVersion:  v1alpha2
	I0412 20:14:25.305972  282203 ssh_runner.go:195] Run: containerd --version
	I0412 20:14:25.329153  282203 ssh_runner.go:195] Run: containerd --version
	I0412 20:14:25.353837  282203 out.go:176] * Preparing Kubernetes v1.23.6-rc.0 on containerd 1.5.10 ...
	I0412 20:14:25.353941  282203 cli_runner.go:164] Run: docker network inspect newest-cni-20220412201253-42006 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0412 20:14:25.390025  282203 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0412 20:14:25.393752  282203 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0412 20:14:25.406736  282203 out.go:176]   - kubelet.network-plugin=cni
	I0412 20:14:25.408682  282203 out.go:176]   - kubeadm.pod-network-cidr=192.168.111.111/16
	I0412 20:14:24.127325  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:14:26.127416  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:14:28.127721  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:14:25.410319  282203 out.go:176]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0412 20:14:25.410383  282203 preload.go:132] Checking if preload exists for k8s version v1.23.6-rc.0 and runtime containerd
	I0412 20:14:25.410438  282203 ssh_runner.go:195] Run: sudo crictl images --output json
	I0412 20:14:25.435000  282203 containerd.go:607] all images are preloaded for containerd runtime.
	I0412 20:14:25.435025  282203 containerd.go:521] Images already preloaded, skipping extraction
	I0412 20:14:25.435069  282203 ssh_runner.go:195] Run: sudo crictl images --output json
	I0412 20:14:25.460785  282203 containerd.go:607] all images are preloaded for containerd runtime.
	I0412 20:14:25.460815  282203 cache_images.go:84] Images are preloaded, skipping loading
	I0412 20:14:25.460865  282203 ssh_runner.go:195] Run: sudo crictl info
	I0412 20:14:25.486553  282203 cni.go:93] Creating CNI manager for ""
	I0412 20:14:25.486581  282203 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0412 20:14:25.486596  282203 kubeadm.go:87] Using pod CIDR: 192.168.111.111/16
	I0412 20:14:25.486612  282203 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:192.168.111.111/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.23.6-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-20220412201253-42006 NodeName:newest-cni-20220412201253-42006 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leade
r-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0412 20:14:25.486771  282203 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "newest-cni-20220412201253-42006"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "192.168.111.111/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "192.168.111.111/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0412 20:14:25.486858  282203 kubeadm.go:936] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-20220412201253-42006 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.76.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6-rc.0 ClusterName:newest-cni-20220412201253-42006 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0412 20:14:25.486911  282203 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6-rc.0
	I0412 20:14:25.495243  282203 binaries.go:44] Found k8s binaries, skipping transfer
	I0412 20:14:25.495328  282203 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0412 20:14:25.502983  282203 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (618 bytes)
	I0412 20:14:25.516969  282203 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0412 20:14:25.530231  282203 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2201 bytes)
	I0412 20:14:25.544174  282203 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0412 20:14:25.547463  282203 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
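The /etc/hosts edit above is made idempotent by filtering out any previous entry for the name before appending a fresh one, then copying the staged file back into place. A sketch of the same filter-and-append in Go (the final copy over /etc/hosts still needs root, which is why the logged pipeline stages a temp file and `sudo cp`s it):

    package main

    import (
    	"log"
    	"os"
    	"strings"
    )

    func main() {
    	const (
    		name = "control-plane.minikube.internal"
    		ip   = "192.168.76.2"
    	)
    	data, err := os.ReadFile("/etc/hosts")
    	if err != nil {
    		log.Fatal(err)
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if strings.HasSuffix(line, "\t"+name) {
    			continue // drop any stale entry, like the grep -v above
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, ip+"\t"+name)
    	// Stage the result; a privileged copy then replaces /etc/hosts.
    	if err := os.WriteFile("/tmp/hosts.new", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
    		log.Fatal(err)
    	}
    }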
	I0412 20:14:25.557235  282203 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/newest-cni-20220412201253-42006 for IP: 192.168.76.2
	I0412 20:14:25.557346  282203 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.key
	I0412 20:14:25.557383  282203 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.key
	I0412 20:14:25.557447  282203 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/newest-cni-20220412201253-42006/client.key
	I0412 20:14:25.557553  282203 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/newest-cni-20220412201253-42006/apiserver.key.31bdca25
	I0412 20:14:25.557606  282203 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/newest-cni-20220412201253-42006/proxy-client.key
	I0412 20:14:25.557698  282203 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/42006.pem (1338 bytes)
	W0412 20:14:25.557730  282203 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/42006_empty.pem, impossibly tiny 0 bytes
	I0412 20:14:25.557745  282203 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem (1679 bytes)
	I0412 20:14:25.557768  282203 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem (1082 bytes)
	I0412 20:14:25.557791  282203 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem (1123 bytes)
	I0412 20:14:25.557819  282203 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem (1675 bytes)
	I0412 20:14:25.557861  282203 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/420062.pem (1708 bytes)
	I0412 20:14:25.558574  282203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/newest-cni-20220412201253-42006/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0412 20:14:25.577575  282203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/newest-cni-20220412201253-42006/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0412 20:14:25.597461  282203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/newest-cni-20220412201253-42006/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0412 20:14:25.617831  282203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/newest-cni-20220412201253-42006/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0412 20:14:25.637035  282203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0412 20:14:25.655577  282203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0412 20:14:25.673593  282203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0412 20:14:25.693796  282203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0412 20:14:25.713653  282203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/42006.pem --> /usr/share/ca-certificates/42006.pem (1338 bytes)
	I0412 20:14:25.732646  282203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/420062.pem --> /usr/share/ca-certificates/420062.pem (1708 bytes)
	I0412 20:14:25.751515  282203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0412 20:14:25.770576  282203 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0412 20:14:25.784726  282203 ssh_runner.go:195] Run: openssl version
	I0412 20:14:25.790079  282203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/42006.pem && ln -fs /usr/share/ca-certificates/42006.pem /etc/ssl/certs/42006.pem"
	I0412 20:14:25.799378  282203 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42006.pem
	I0412 20:14:25.802945  282203 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Apr 12 19:26 /usr/share/ca-certificates/42006.pem
	I0412 20:14:25.803028  282203 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42006.pem
	I0412 20:14:25.808734  282203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/42006.pem /etc/ssl/certs/51391683.0"
	I0412 20:14:25.816535  282203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/420062.pem && ln -fs /usr/share/ca-certificates/420062.pem /etc/ssl/certs/420062.pem"
	I0412 20:14:25.825325  282203 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/420062.pem
	I0412 20:14:25.828750  282203 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Apr 12 19:26 /usr/share/ca-certificates/420062.pem
	I0412 20:14:25.828803  282203 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/420062.pem
	I0412 20:14:25.834167  282203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/420062.pem /etc/ssl/certs/3ec20f2e.0"
	I0412 20:14:25.841792  282203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0412 20:14:25.850010  282203 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0412 20:14:25.853624  282203 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Apr 12 19:21 /usr/share/ca-certificates/minikubeCA.pem
	I0412 20:14:25.853701  282203 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0412 20:14:25.859058  282203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
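The openssl x509 -hash / ln -fs pairs above build the subject-hash symlinks (51391683.0, 3ec20f2e.0, b5213941.0) that OpenSSL uses to look up trusted CAs in /etc/ssl/certs. A sketch that wraps the same openssl invocation to compute the link name; the paths are illustrative:

    package main

    import (
    	"fmt"
    	"log"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	pemPath := os.Args[1] // e.g. /usr/share/ca-certificates/minikubeCA.pem
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		log.Fatal(err)
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	fmt.Printf("ln -fs %s %s\n", pemPath, link)
    }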
	I0412 20:14:25.866757  282203 kubeadm.go:391] StartCluster: {Name:newest-cni-20220412201253-42006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6-rc.0 ClusterName:newest-cni-20220412201253-42006 Namespace:default APIServerName:minikubeC
A APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.23.6-rc.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.doma
in] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0412 20:14:25.866859  282203 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0412 20:14:25.866908  282203 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0412 20:14:25.894574  282203 cri.go:87] found id: "d969ce6ca95955b480d8655ab7bd7a09dabfb293b5353339e504f1f33b9eff67"
	I0412 20:14:25.894601  282203 cri.go:87] found id: "1d38fd300b7c85004f77d83cbb475438790ef3b9d337060fdb1b819d68d35ec9"
	I0412 20:14:25.894608  282203 cri.go:87] found id: "0bb8f66256b11644865229170aad9e34ea182a35e5158387000ff3b1865202fd"
	I0412 20:14:25.894614  282203 cri.go:87] found id: "a242ae4af2407bb2e31ddb8d71f49ef4cb0ff85cc236478c5f9535fa5c980eb3"
	I0412 20:14:25.894619  282203 cri.go:87] found id: "86c36d2f4f49c410f131864116fb679629344c479e0e487369a21787e119a356"
	I0412 20:14:25.894631  282203 cri.go:87] found id: "7c408f89710edca0b859d2e677ea93d81c6f5d56606b251c3a3d527ab1b6743d"
	I0412 20:14:25.894637  282203 cri.go:87] found id: ""
	I0412 20:14:25.894696  282203 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0412 20:14:25.909659  282203 cri.go:114] JSON = null
	W0412 20:14:25.909724  282203 kubeadm.go:398] unpause failed: list paused: list returned 0 containers, but ps returned 6
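The warning above comes from comparing two views of the same runtime: `crictl ps -a --quiet` with the kube-system label returned six container IDs, while `runc --root /run/containerd/runc/k8s.io list -f json` returned null, so no paused set could be computed and the unpause step was skipped. A sketch of the crictl side of that check, using the exact label filter logged above:

    package main

    import (
    	"log"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// IDs of kube-system containers, one per line; empty output means none.
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
    		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
    	if err != nil {
    		log.Fatal(err)
    	}
    	ids := strings.Fields(string(out))
    	log.Printf("crictl sees %d kube-system containers", len(ids))
    	// Cross-checking against runc's JSON listing (null here) is what
    	// produces "list returned 0 containers, but ps returned 6".
    }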
	I0412 20:14:25.909774  282203 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0412 20:14:25.917474  282203 kubeadm.go:402] found existing configuration files, will attempt cluster restart
	I0412 20:14:25.917508  282203 kubeadm.go:601] restartCluster start
	I0412 20:14:25.917553  282203 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0412 20:14:25.925481  282203 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:14:25.926482  282203 kubeconfig.go:116] verify returned: extract IP: "newest-cni-20220412201253-42006" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0412 20:14:25.927149  282203 kubeconfig.go:127] "newest-cni-20220412201253-42006" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig - will repair!
	I0412 20:14:25.928050  282203 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig: {Name:mk47182dadcc139652898f38b199a7292a3b4031 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 20:14:25.929973  282203 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0412 20:14:25.937574  282203 api_server.go:165] Checking apiserver status ...
	I0412 20:14:25.937643  282203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:14:25.946692  282203 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:14:26.147196  282203 api_server.go:165] Checking apiserver status ...
	I0412 20:14:26.147313  282203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:14:26.157070  282203 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:14:26.347407  282203 api_server.go:165] Checking apiserver status ...
	I0412 20:14:26.347480  282203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:14:26.356517  282203 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:14:26.547770  282203 api_server.go:165] Checking apiserver status ...
	I0412 20:14:26.547871  282203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:14:26.557039  282203 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:14:26.747366  282203 api_server.go:165] Checking apiserver status ...
	I0412 20:14:26.747450  282203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:14:26.757308  282203 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:14:26.947424  282203 api_server.go:165] Checking apiserver status ...
	I0412 20:14:26.947524  282203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:14:26.956488  282203 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:14:27.147733  282203 api_server.go:165] Checking apiserver status ...
	I0412 20:14:27.147821  282203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:14:27.156974  282203 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:14:27.347245  282203 api_server.go:165] Checking apiserver status ...
	I0412 20:14:27.347355  282203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:14:27.356556  282203 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:14:27.547767  282203 api_server.go:165] Checking apiserver status ...
	I0412 20:14:27.547845  282203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:14:27.557055  282203 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:14:27.747315  282203 api_server.go:165] Checking apiserver status ...
	I0412 20:14:27.747407  282203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:14:27.756437  282203 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:14:27.947668  282203 api_server.go:165] Checking apiserver status ...
	I0412 20:14:27.947755  282203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:14:27.956980  282203 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:14:28.147211  282203 api_server.go:165] Checking apiserver status ...
	I0412 20:14:28.147335  282203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:14:28.156358  282203 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:14:28.347634  282203 api_server.go:165] Checking apiserver status ...
	I0412 20:14:28.347710  282203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:14:28.356777  282203 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:14:28.546978  282203 api_server.go:165] Checking apiserver status ...
	I0412 20:14:28.547079  282203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:14:28.555852  282203 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:14:28.746989  282203 api_server.go:165] Checking apiserver status ...
	I0412 20:14:28.747054  282203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:14:28.755735  282203 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:14:30.627141  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:14:32.627877  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:14:28.947273  282203 api_server.go:165] Checking apiserver status ...
	I0412 20:14:28.947359  282203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:14:28.956917  282203 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:14:28.956943  282203 api_server.go:165] Checking apiserver status ...
	I0412 20:14:28.956997  282203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:14:28.965673  282203 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:14:28.965703  282203 kubeadm.go:576] needs reconfigure: apiserver error: timed out waiting for the condition
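The block above polls for the kube-apiserver process with pgrep at roughly 200ms intervals and gives up after a few seconds, at which point the cluster is marked for reconfiguration. A minimal sketch of that poll; the pattern is the one logged, the timeout here is illustrative:

    package main

    import (
    	"log"
    	"os/exec"
    	"time"
    )

    // waitForProcess polls `pgrep -xnf pattern` until it exits 0
    // or the deadline passes.
    func waitForProcess(pattern string, timeout time.Duration) bool {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if exec.Command("sudo", "pgrep", "-xnf", pattern).Run() == nil {
    			return true
    		}
    		time.Sleep(200 * time.Millisecond)
    	}
    	return false
    }

    func main() {
    	if !waitForProcess("kube-apiserver.*minikube.*", 3*time.Second) {
    		log.Println("apiserver never appeared: needs reconfigure")
    	}
    }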
	I0412 20:14:28.965712  282203 kubeadm.go:1067] stopping kube-system containers ...
	I0412 20:14:28.965726  282203 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0412 20:14:28.965780  282203 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0412 20:14:28.994340  282203 cri.go:87] found id: "d969ce6ca95955b480d8655ab7bd7a09dabfb293b5353339e504f1f33b9eff67"
	I0412 20:14:28.994369  282203 cri.go:87] found id: "1d38fd300b7c85004f77d83cbb475438790ef3b9d337060fdb1b819d68d35ec9"
	I0412 20:14:28.994378  282203 cri.go:87] found id: "0bb8f66256b11644865229170aad9e34ea182a35e5158387000ff3b1865202fd"
	I0412 20:14:28.994392  282203 cri.go:87] found id: "a242ae4af2407bb2e31ddb8d71f49ef4cb0ff85cc236478c5f9535fa5c980eb3"
	I0412 20:14:28.994401  282203 cri.go:87] found id: "86c36d2f4f49c410f131864116fb679629344c479e0e487369a21787e119a356"
	I0412 20:14:28.994410  282203 cri.go:87] found id: "7c408f89710edca0b859d2e677ea93d81c6f5d56606b251c3a3d527ab1b6743d"
	I0412 20:14:28.994419  282203 cri.go:87] found id: ""
	I0412 20:14:28.994431  282203 cri.go:232] Stopping containers: [d969ce6ca95955b480d8655ab7bd7a09dabfb293b5353339e504f1f33b9eff67 1d38fd300b7c85004f77d83cbb475438790ef3b9d337060fdb1b819d68d35ec9 0bb8f66256b11644865229170aad9e34ea182a35e5158387000ff3b1865202fd a242ae4af2407bb2e31ddb8d71f49ef4cb0ff85cc236478c5f9535fa5c980eb3 86c36d2f4f49c410f131864116fb679629344c479e0e487369a21787e119a356 7c408f89710edca0b859d2e677ea93d81c6f5d56606b251c3a3d527ab1b6743d]
	I0412 20:14:28.994486  282203 ssh_runner.go:195] Run: which crictl
	I0412 20:14:28.997755  282203 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop d969ce6ca95955b480d8655ab7bd7a09dabfb293b5353339e504f1f33b9eff67 1d38fd300b7c85004f77d83cbb475438790ef3b9d337060fdb1b819d68d35ec9 0bb8f66256b11644865229170aad9e34ea182a35e5158387000ff3b1865202fd a242ae4af2407bb2e31ddb8d71f49ef4cb0ff85cc236478c5f9535fa5c980eb3 86c36d2f4f49c410f131864116fb679629344c479e0e487369a21787e119a356 7c408f89710edca0b859d2e677ea93d81c6f5d56606b251c3a3d527ab1b6743d
	I0412 20:14:29.026024  282203 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0412 20:14:29.037162  282203 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0412 20:14:29.044772  282203 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Apr 12 20:13 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Apr 12 20:13 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2063 Apr 12 20:13 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Apr 12 20:13 /etc/kubernetes/scheduler.conf
	
	I0412 20:14:29.044835  282203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0412 20:14:29.052237  282203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0412 20:14:29.059409  282203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0412 20:14:29.066564  282203 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:14:29.066629  282203 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0412 20:14:29.073927  282203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0412 20:14:29.081806  282203 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:14:29.081873  282203 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0412 20:14:29.089097  282203 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0412 20:14:29.097286  282203 kubeadm.go:678] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0412 20:14:29.097318  282203 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6-rc.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0412 20:14:29.143554  282203 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6-rc.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0412 20:14:29.837517  282203 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6-rc.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0412 20:14:29.985443  282203 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6-rc.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0412 20:14:30.038605  282203 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6-rc.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0412 20:14:30.112525  282203 api_server.go:51] waiting for apiserver process to appear ...
	I0412 20:14:30.112599  282203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:14:30.622626  282203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:14:31.122421  282203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:14:31.622412  282203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:14:32.122000  282203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:14:32.622749  282203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:14:33.122311  282203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:14:33.622220  282203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:14:35.128008  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:14:37.628055  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:14:34.122750  282203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:14:34.622370  282203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:14:35.122375  282203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:14:35.622023  282203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:14:36.122611  282203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:14:36.192499  282203 api_server.go:71] duration metric: took 6.079970753s to wait for apiserver process to appear ...
	I0412 20:14:36.192531  282203 api_server.go:87] waiting for apiserver healthz status ...
	I0412 20:14:36.192547  282203 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0412 20:14:36.192951  282203 api_server.go:256] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0412 20:14:36.693238  282203 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0412 20:14:39.081785  282203 api_server.go:266] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0412 20:14:39.081830  282203 api_server.go:102] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0412 20:14:39.193101  282203 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0412 20:14:39.198543  282203 api_server.go:266] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0412 20:14:39.198577  282203 api_server.go:102] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0412 20:14:39.693125  282203 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0412 20:14:39.698513  282203 api_server.go:266] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0412 20:14:39.698546  282203 api_server.go:102] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0412 20:14:40.194142  282203 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0412 20:14:40.199360  282203 api_server.go:266] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0412 20:14:40.199402  282203 api_server.go:102] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0412 20:14:40.693984  282203 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0412 20:14:40.698538  282203 api_server.go:266] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0412 20:14:40.704599  282203 api_server.go:140] control plane version: v1.23.6-rc.0
	I0412 20:14:40.704627  282203 api_server.go:130] duration metric: took 4.512088959s to wait for apiserver health ...
	I0412 20:14:40.704637  282203 cni.go:93] Creating CNI manager for ""
	I0412 20:14:40.704648  282203 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0412 20:14:40.707243  282203 out.go:176] * Configuring CNI (Container Networking Interface) ...
	I0412 20:14:40.707307  282203 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0412 20:14:40.711258  282203 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.6-rc.0/kubectl ...
	I0412 20:14:40.711285  282203 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0412 20:14:40.725079  282203 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6-rc.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0412 20:14:41.409231  282203 system_pods.go:43] waiting for kube-system pods to appear ...
	I0412 20:14:41.417820  282203 system_pods.go:59] 9 kube-system pods found
	I0412 20:14:41.417861  282203 system_pods.go:61] "coredns-64897985d-4bvbc" [fb9e8493-9c0d-4e05-b53a-1749537e5040] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0412 20:14:41.417873  282203 system_pods.go:61] "etcd-newest-cni-20220412201253-42006" [3aad179e-c3c7-4666-a6d3-d255640590a8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0412 20:14:41.417889  282203 system_pods.go:61] "kindnet-n5jt7" [a91f07c6-2b78-4581-b9ac-f3a3c3626dd8] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0412 20:14:41.417894  282203 system_pods.go:61] "kube-apiserver-newest-cni-20220412201253-42006" [2d4d9c73-5232-4a9c-99fb-7b9006cf532b] Running
	I0412 20:14:41.417903  282203 system_pods.go:61] "kube-controller-manager-newest-cni-20220412201253-42006" [ddacb408-0fe4-4726-b426-a84e7d23a1c0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0412 20:14:41.417913  282203 system_pods.go:61] "kube-proxy-jp96c" [3b9c939e-cafa-4614-a930-02dbf11e941f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0412 20:14:41.417920  282203 system_pods.go:61] "kube-scheduler-newest-cni-20220412201253-42006" [7cc7f50d-6fe0-405a-9438-00b84708bcdd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0412 20:14:41.417932  282203 system_pods.go:61] "metrics-server-b955d9d8-99nk4" [68d97c36-9d61-4926-bd17-e63396989cc8] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0412 20:14:41.417938  282203 system_pods.go:61] "storage-provisioner" [43ce4397-4b28-450b-b967-f8f2b597585c] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0412 20:14:41.417944  282203 system_pods.go:74] duration metric: took 8.691981ms to wait for pod list to return data ...
	I0412 20:14:41.417956  282203 node_conditions.go:102] verifying NodePressure condition ...
	I0412 20:14:41.421510  282203 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0412 20:14:41.421537  282203 node_conditions.go:123] node cpu capacity is 8
	I0412 20:14:41.421549  282203 node_conditions.go:105] duration metric: took 3.589136ms to run NodePressure ...
	I0412 20:14:41.421570  282203 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0412 20:14:41.576233  282203 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0412 20:14:41.583862  282203 ops.go:34] apiserver oom_adj: -16
	I0412 20:14:41.583887  282203 kubeadm.go:605] restartCluster took 15.666373103s
	I0412 20:14:41.583897  282203 kubeadm.go:393] StartCluster complete in 15.717149501s
	I0412 20:14:41.583915  282203 settings.go:142] acquiring lock: {Name:mkaf0259d09993f7f0249c32b54fea561e21f88c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 20:14:41.584019  282203 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0412 20:14:41.586119  282203 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig: {Name:mk47182dadcc139652898f38b199a7292a3b4031 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 20:14:41.591379  282203 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "newest-cni-20220412201253-42006" rescaled to 1
	I0412 20:14:41.591451  282203 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.23.6-rc.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0412 20:14:41.593719  282203 out.go:176] * Verifying Kubernetes components...
	I0412 20:14:41.591533  282203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0412 20:14:41.591554  282203 addons.go:415] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0412 20:14:41.591660  282203 config.go:178] Loaded profile config "newest-cni-20220412201253-42006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6-rc.0
	I0412 20:14:41.593837  282203 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0412 20:14:41.593881  282203 addons.go:65] Setting storage-provisioner=true in profile "newest-cni-20220412201253-42006"
	I0412 20:14:41.593907  282203 addons.go:153] Setting addon storage-provisioner=true in "newest-cni-20220412201253-42006"
	W0412 20:14:41.593912  282203 addons.go:165] addon storage-provisioner should already be in state true
	I0412 20:14:41.593947  282203 addons.go:65] Setting default-storageclass=true in profile "newest-cni-20220412201253-42006"
	I0412 20:14:41.593971  282203 addons.go:65] Setting metrics-server=true in profile "newest-cni-20220412201253-42006"
	I0412 20:14:41.593979  282203 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-20220412201253-42006"
	I0412 20:14:41.593992  282203 addons.go:153] Setting addon metrics-server=true in "newest-cni-20220412201253-42006"
	W0412 20:14:41.594005  282203 addons.go:165] addon metrics-server should already be in state true
	I0412 20:14:41.593995  282203 addons.go:65] Setting dashboard=true in profile "newest-cni-20220412201253-42006"
	I0412 20:14:41.594043  282203 host.go:66] Checking if "newest-cni-20220412201253-42006" exists ...
	I0412 20:14:41.593973  282203 host.go:66] Checking if "newest-cni-20220412201253-42006" exists ...
	I0412 20:14:41.594045  282203 addons.go:153] Setting addon dashboard=true in "newest-cni-20220412201253-42006"
	W0412 20:14:41.594280  282203 addons.go:165] addon dashboard should already be in state true
	I0412 20:14:41.594328  282203 host.go:66] Checking if "newest-cni-20220412201253-42006" exists ...
	I0412 20:14:41.594334  282203 cli_runner.go:164] Run: docker container inspect newest-cni-20220412201253-42006 --format={{.State.Status}}
	I0412 20:14:41.594502  282203 cli_runner.go:164] Run: docker container inspect newest-cni-20220412201253-42006 --format={{.State.Status}}
	I0412 20:14:41.594639  282203 cli_runner.go:164] Run: docker container inspect newest-cni-20220412201253-42006 --format={{.State.Status}}
	I0412 20:14:41.594799  282203 cli_runner.go:164] Run: docker container inspect newest-cni-20220412201253-42006 --format={{.State.Status}}
	I0412 20:14:41.645907  282203 out.go:176]   - Using image kubernetesui/dashboard:v2.5.1
	I0412 20:14:41.648341  282203 out.go:176]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0412 20:14:41.650175  282203 out.go:176]   - Using image k8s.gcr.io/echoserver:1.4
	I0412 20:14:41.651645  282203 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0412 20:14:41.648424  282203 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0412 20:14:41.651681  282203 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0412 20:14:41.650260  282203 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0412 20:14:41.651782  282203 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0412 20:14:41.651798  282203 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0412 20:14:41.651811  282203 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0412 20:14:41.651751  282203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220412201253-42006
	I0412 20:14:41.651850  282203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220412201253-42006
	I0412 20:14:41.651850  282203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220412201253-42006
	I0412 20:14:41.667707  282203 addons.go:153] Setting addon default-storageclass=true in "newest-cni-20220412201253-42006"
	W0412 20:14:41.667739  282203 addons.go:165] addon default-storageclass should already be in state true
	I0412 20:14:41.667770  282203 host.go:66] Checking if "newest-cni-20220412201253-42006" exists ...
	I0412 20:14:41.668264  282203 cli_runner.go:164] Run: docker container inspect newest-cni-20220412201253-42006 --format={{.State.Status}}
	I0412 20:14:41.679431  282203 start.go:757] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0412 20:14:41.679495  282203 api_server.go:51] waiting for apiserver process to appear ...
	I0412 20:14:41.679542  282203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:14:41.692016  282203 api_server.go:71] duration metric: took 100.509345ms to wait for apiserver process to appear ...
	I0412 20:14:41.692053  282203 api_server.go:87] waiting for apiserver healthz status ...
	I0412 20:14:41.692097  282203 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0412 20:14:41.698410  282203 api_server.go:266] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0412 20:14:41.699444  282203 api_server.go:140] control plane version: v1.23.6-rc.0
	I0412 20:14:41.699470  282203 api_server.go:130] duration metric: took 7.409196ms to wait for apiserver health ...
	I0412 20:14:41.699481  282203 system_pods.go:43] waiting for kube-system pods to appear ...
	I0412 20:14:41.701111  282203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49422 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/newest-cni-20220412201253-42006/id_rsa Username:docker}
	I0412 20:14:41.706303  282203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49422 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/newest-cni-20220412201253-42006/id_rsa Username:docker}
	I0412 20:14:41.706406  282203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49422 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/newest-cni-20220412201253-42006/id_rsa Username:docker}
	I0412 20:14:41.707318  282203 system_pods.go:59] 9 kube-system pods found
	I0412 20:14:41.707353  282203 system_pods.go:61] "coredns-64897985d-4bvbc" [fb9e8493-9c0d-4e05-b53a-1749537e5040] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0412 20:14:41.707367  282203 system_pods.go:61] "etcd-newest-cni-20220412201253-42006" [3aad179e-c3c7-4666-a6d3-d255640590a8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0412 20:14:41.707377  282203 system_pods.go:61] "kindnet-n5jt7" [a91f07c6-2b78-4581-b9ac-f3a3c3626dd8] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0412 20:14:41.707389  282203 system_pods.go:61] "kube-apiserver-newest-cni-20220412201253-42006" [2d4d9c73-5232-4a9c-99fb-7b9006cf532b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0412 20:14:41.707406  282203 system_pods.go:61] "kube-controller-manager-newest-cni-20220412201253-42006" [ddacb408-0fe4-4726-b426-a84e7d23a1c0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0412 20:14:41.707419  282203 system_pods.go:61] "kube-proxy-jp96c" [3b9c939e-cafa-4614-a930-02dbf11e941f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0412 20:14:41.707429  282203 system_pods.go:61] "kube-scheduler-newest-cni-20220412201253-42006" [7cc7f50d-6fe0-405a-9438-00b84708bcdd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0412 20:14:41.707446  282203 system_pods.go:61] "metrics-server-b955d9d8-99nk4" [68d97c36-9d61-4926-bd17-e63396989cc8] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0412 20:14:41.707460  282203 system_pods.go:61] "storage-provisioner" [43ce4397-4b28-450b-b967-f8f2b597585c] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0412 20:14:41.707470  282203 system_pods.go:74] duration metric: took 7.981821ms to wait for pod list to return data ...
	I0412 20:14:41.707485  282203 default_sa.go:34] waiting for default service account to be created ...
	I0412 20:14:41.710431  282203 default_sa.go:45] found service account: "default"
	I0412 20:14:41.710468  282203 default_sa.go:55] duration metric: took 2.960657ms for default service account to be created ...
	I0412 20:14:41.710484  282203 kubeadm.go:548] duration metric: took 118.993322ms to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0412 20:14:41.710512  282203 node_conditions.go:102] verifying NodePressure condition ...
	I0412 20:14:41.713571  282203 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0412 20:14:41.713602  282203 node_conditions.go:123] node cpu capacity is 8
	I0412 20:14:41.713615  282203 node_conditions.go:105] duration metric: took 3.097862ms to run NodePressure ...
	I0412 20:14:41.713630  282203 start.go:213] waiting for startup goroutines ...
	I0412 20:14:41.720393  282203 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0412 20:14:41.720422  282203 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0412 20:14:41.720491  282203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220412201253-42006
	I0412 20:14:41.757709  282203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49422 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/newest-cni-20220412201253-42006/id_rsa Username:docker}
	I0412 20:14:41.804226  282203 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0412 20:14:41.804481  282203 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0412 20:14:41.804508  282203 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0412 20:14:41.804720  282203 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0412 20:14:41.804748  282203 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0412 20:14:41.819378  282203 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0412 20:14:41.819406  282203 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0412 20:14:41.819826  282203 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0412 20:14:41.819846  282203 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0412 20:14:41.834332  282203 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0412 20:14:41.834367  282203 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0412 20:14:41.834666  282203 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0412 20:14:41.834688  282203 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0412 20:14:41.885128  282203 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0412 20:14:41.885162  282203 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0412 20:14:41.887023  282203 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0412 20:14:41.887024  282203 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0412 20:14:41.904985  282203 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0412 20:14:41.905020  282203 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0412 20:14:41.984315  282203 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0412 20:14:41.984351  282203 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0412 20:14:42.005906  282203 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0412 20:14:42.005935  282203 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0412 20:14:42.084416  282203 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0412 20:14:42.084456  282203 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0412 20:14:42.108756  282203 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0412 20:14:42.108790  282203 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0412 20:14:42.191600  282203 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0412 20:14:42.390295  282203 addons.go:386] Verifying addon metrics-server=true in "newest-cni-20220412201253-42006"
	I0412 20:14:42.587518  282203 out.go:176] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0412 20:14:42.587549  282203 addons.go:417] enableAddons completed in 996.00198ms
	I0412 20:14:42.625739  282203 start.go:499] kubectl: 1.23.5, cluster: 1.23.6-rc.0 (minor skew: 0)
	I0412 20:14:42.628049  282203 out.go:176] * Done! kubectl is now configured to use "newest-cni-20220412201253-42006" cluster and "default" namespace by default
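	For context, the healthz probes logged above (the api_server.go:240/266 lines with their [+]/[-] check lists) boil down to a TLS GET against the apiserver's /healthz endpoint, retried until it returns 200 "ok". A minimal Go sketch of such a poll follows; the endpoint, the retry interval, and the insecure TLS config are illustrative assumptions, not minikube's actual implementation.

	    package main

	    import (
	        "crypto/tls"
	        "fmt"
	        "io"
	        "net/http"
	        "time"
	    )

	    func main() {
	        client := &http.Client{
	            Timeout: 5 * time.Second,
	            // The bootstrapping apiserver serves a cluster-local certificate, so a
	            // bare probe must either trust the cluster CA or, as assumed here,
	            // skip verification.
	            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	        }
	        for {
	            resp, err := client.Get("https://192.168.76.2:8443/healthz?verbose")
	            if err == nil {
	                body, _ := io.ReadAll(resp.Body)
	                resp.Body.Close()
	                fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
	                if resp.StatusCode == http.StatusOK {
	                    return // the apiserver reports "ok"
	                }
	            }
	            time.Sleep(500 * time.Millisecond)
	        }
	    }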
	I0412 20:14:39.628134  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:14:41.628747  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:14:44.127896  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:14:46.627912  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:14:49.127578  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:14:51.627785  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:14:54.127667  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:14:56.627555  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:14:58.627673  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:15:01.127467  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:15:03.127958  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:15:05.627336  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:15:08.127482  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:15:10.128205  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:15:12.627006  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:15:14.627346  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:15:16.627715  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:15:19.127750  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:15:21.628033  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:15:24.127487  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:15:26.127773  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:15:28.627700  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:15:30.627863  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:15:32.627913  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:15:35.127918  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:15:37.627523  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:15:40.127924  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:15:42.627025  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:15:44.627571  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:15:46.628015  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:15:49.127289  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:15:51.627337  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:15:53.627707  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:15:56.127293  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:15:58.127903  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:16:00.128429  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:16:02.129651  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:16:04.627411  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:16:07.127206  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:16:09.128308  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:16:11.627780  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:16:14.127483  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:16:16.627781  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:16:19.127539  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:16:21.627671  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:16:24.127732  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:16:26.627810  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:16:29.126973  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:16:31.128232  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:16:33.626978  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:16:35.627709  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:16:38.127682  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:16:40.627714  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:16:43.127935  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:16:45.627570  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:16:47.627702  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:16:50.127764  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:16:52.627288  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:16:55.127319  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:16:57.128161  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:16:59.627554  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:17:02.128657  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:17:04.627577  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:17:07.127689  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:17:09.627222  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:17:12.127950  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:17:14.627403  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:17:17.127577  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
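	The long run of node_ready.go:58 lines above is a poll loop: fetch the node object roughly every 2.5s and read its Ready condition, which stays "False" for the entire stretch shown. A hedged client-go sketch of that kind of check follows; the kubeconfig path is a placeholder and the loop is illustrative, not minikube's actual helper.

	    package main

	    import (
	        "context"
	        "fmt"
	        "time"

	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )

	    func main() {
	        // Placeholder kubeconfig path; substitute the real one.
	        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	        if err != nil {
	            panic(err)
	        }
	        cs, err := kubernetes.NewForConfig(cfg)
	        if err != nil {
	            panic(err)
	        }
	        name := "default-k8s-different-port-20220412201228-42006"
	        for {
	            node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	            if err == nil {
	                for _, c := range node.Status.Conditions {
	                    if c.Type == corev1.NodeReady {
	                        fmt.Printf("node %q has status \"Ready\":%q\n", name, c.Status)
	                        if c.Status == corev1.ConditionTrue {
	                            return // node became Ready
	                        }
	                    }
	                }
	            }
	            time.Sleep(2500 * time.Millisecond) // roughly the cadence seen above
	        }
	    }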
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	1bd2c2fccd8c5       6de166512aa22       3 seconds ago       Running             kindnet-cni               4                   72ec8def5691d
	f03411fc53304       6de166512aa22       3 minutes ago       Exited              kindnet-cni               3                   72ec8def5691d
	d1642a69585f2       c21b0c7400f98       12 minutes ago      Running             kube-proxy                0                   633802cf99325
	6cc69a6c92a9c       301ddc62b80b1       12 minutes ago      Running             kube-scheduler            0                   a58c9be88b91f
	e47ba7bc7187c       b305571ca60a5       12 minutes ago      Running             kube-apiserver            0                   1038e52b21658
	f29f2d4e263bc       b2756210eeabf       12 minutes ago      Running             etcd                      0                   8b1dc4454ac4d
	e3d3ef830b73a       06a629a7e51cd       12 minutes ago      Running             kube-controller-manager   0                   7042f76bd3470
	
	* 
	* ==> containerd <==
	* -- Logs begin at Tue 2022-04-12 20:04:30 UTC, end at Tue 2022-04-12 20:17:21 UTC. --
	Apr 12 20:10:37 old-k8s-version-20220412200421-42006 containerd[471]: time="2022-04-12T20:10:37.217028672Z" level=info msg="RemoveContainer for \"019c66def7622dba48d959bc981c7d3e780afe2450172b618014e5aa7f78e227\" returns successfully"
	Apr 12 20:10:48 old-k8s-version-20220412200421-42006 containerd[471]: time="2022-04-12T20:10:48.676396846Z" level=info msg="CreateContainer within sandbox \"72ec8def5691dad6428509dd888e491782c456828105fcda0a80993268baecd8\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:2,}"
	Apr 12 20:10:48 old-k8s-version-20220412200421-42006 containerd[471]: time="2022-04-12T20:10:48.691989104Z" level=info msg="CreateContainer within sandbox \"72ec8def5691dad6428509dd888e491782c456828105fcda0a80993268baecd8\" for &ContainerMetadata{Name:kindnet-cni,Attempt:2,} returns container id \"f15a1baf56cab3bcc159300dee79248a0ee811277a6810065b58050c96a7f78b\""
	Apr 12 20:10:48 old-k8s-version-20220412200421-42006 containerd[471]: time="2022-04-12T20:10:48.692643513Z" level=info msg="StartContainer for \"f15a1baf56cab3bcc159300dee79248a0ee811277a6810065b58050c96a7f78b\""
	Apr 12 20:10:48 old-k8s-version-20220412200421-42006 containerd[471]: time="2022-04-12T20:10:48.884793168Z" level=info msg="StartContainer for \"f15a1baf56cab3bcc159300dee79248a0ee811277a6810065b58050c96a7f78b\" returns successfully"
	Apr 12 20:13:29 old-k8s-version-20220412200421-42006 containerd[471]: time="2022-04-12T20:13:29.136711579Z" level=info msg="shim disconnected" id=f15a1baf56cab3bcc159300dee79248a0ee811277a6810065b58050c96a7f78b
	Apr 12 20:13:29 old-k8s-version-20220412200421-42006 containerd[471]: time="2022-04-12T20:13:29.136781903Z" level=warning msg="cleaning up after shim disconnected" id=f15a1baf56cab3bcc159300dee79248a0ee811277a6810065b58050c96a7f78b namespace=k8s.io
	Apr 12 20:13:29 old-k8s-version-20220412200421-42006 containerd[471]: time="2022-04-12T20:13:29.136805340Z" level=info msg="cleaning up dead shim"
	Apr 12 20:13:29 old-k8s-version-20220412200421-42006 containerd[471]: time="2022-04-12T20:13:29.147372302Z" level=warning msg="cleanup warnings time=\"2022-04-12T20:13:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3555\n"
	Apr 12 20:13:29 old-k8s-version-20220412200421-42006 containerd[471]: time="2022-04-12T20:13:29.446586405Z" level=info msg="RemoveContainer for \"78996594d04da29b800c294937702cde8e1c1ed203ac6a1a024c00cbba2b0c74\""
	Apr 12 20:13:29 old-k8s-version-20220412200421-42006 containerd[471]: time="2022-04-12T20:13:29.452931148Z" level=info msg="RemoveContainer for \"78996594d04da29b800c294937702cde8e1c1ed203ac6a1a024c00cbba2b0c74\" returns successfully"
	Apr 12 20:13:53 old-k8s-version-20220412200421-42006 containerd[471]: time="2022-04-12T20:13:53.675938411Z" level=info msg="CreateContainer within sandbox \"72ec8def5691dad6428509dd888e491782c456828105fcda0a80993268baecd8\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:3,}"
	Apr 12 20:13:53 old-k8s-version-20220412200421-42006 containerd[471]: time="2022-04-12T20:13:53.689783031Z" level=info msg="CreateContainer within sandbox \"72ec8def5691dad6428509dd888e491782c456828105fcda0a80993268baecd8\" for &ContainerMetadata{Name:kindnet-cni,Attempt:3,} returns container id \"f03411fc533041f9ddcf991f18a51b6055896e203a19557ce49131bc9e7796b4\""
	Apr 12 20:13:53 old-k8s-version-20220412200421-42006 containerd[471]: time="2022-04-12T20:13:53.690382317Z" level=info msg="StartContainer for \"f03411fc533041f9ddcf991f18a51b6055896e203a19557ce49131bc9e7796b4\""
	Apr 12 20:13:53 old-k8s-version-20220412200421-42006 containerd[471]: time="2022-04-12T20:13:53.884532000Z" level=info msg="StartContainer for \"f03411fc533041f9ddcf991f18a51b6055896e203a19557ce49131bc9e7796b4\" returns successfully"
	Apr 12 20:16:34 old-k8s-version-20220412200421-42006 containerd[471]: time="2022-04-12T20:16:34.123797718Z" level=info msg="shim disconnected" id=f03411fc533041f9ddcf991f18a51b6055896e203a19557ce49131bc9e7796b4
	Apr 12 20:16:34 old-k8s-version-20220412200421-42006 containerd[471]: time="2022-04-12T20:16:34.123876227Z" level=warning msg="cleaning up after shim disconnected" id=f03411fc533041f9ddcf991f18a51b6055896e203a19557ce49131bc9e7796b4 namespace=k8s.io
	Apr 12 20:16:34 old-k8s-version-20220412200421-42006 containerd[471]: time="2022-04-12T20:16:34.123887952Z" level=info msg="cleaning up dead shim"
	Apr 12 20:16:34 old-k8s-version-20220412200421-42006 containerd[471]: time="2022-04-12T20:16:34.135511059Z" level=warning msg="cleanup warnings time=\"2022-04-12T20:16:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4017\n"
	Apr 12 20:16:34 old-k8s-version-20220412200421-42006 containerd[471]: time="2022-04-12T20:16:34.688914136Z" level=info msg="RemoveContainer for \"f15a1baf56cab3bcc159300dee79248a0ee811277a6810065b58050c96a7f78b\""
	Apr 12 20:16:34 old-k8s-version-20220412200421-42006 containerd[471]: time="2022-04-12T20:16:34.695965899Z" level=info msg="RemoveContainer for \"f15a1baf56cab3bcc159300dee79248a0ee811277a6810065b58050c96a7f78b\" returns successfully"
	Apr 12 20:17:17 old-k8s-version-20220412200421-42006 containerd[471]: time="2022-04-12T20:17:17.675948354Z" level=info msg="CreateContainer within sandbox \"72ec8def5691dad6428509dd888e491782c456828105fcda0a80993268baecd8\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:4,}"
	Apr 12 20:17:17 old-k8s-version-20220412200421-42006 containerd[471]: time="2022-04-12T20:17:17.691522991Z" level=info msg="CreateContainer within sandbox \"72ec8def5691dad6428509dd888e491782c456828105fcda0a80993268baecd8\" for &ContainerMetadata{Name:kindnet-cni,Attempt:4,} returns container id \"1bd2c2fccd8c547472f81fd84ffcc85248838b6f6bded8d4ba9f1c12dfb234c1\""
	Apr 12 20:17:17 old-k8s-version-20220412200421-42006 containerd[471]: time="2022-04-12T20:17:17.692186237Z" level=info msg="StartContainer for \"1bd2c2fccd8c547472f81fd84ffcc85248838b6f6bded8d4ba9f1c12dfb234c1\""
	Apr 12 20:17:17 old-k8s-version-20220412200421-42006 containerd[471]: time="2022-04-12T20:17:17.884440017Z" level=info msg="StartContainer for \"1bd2c2fccd8c547472f81fd84ffcc85248838b6f6bded8d4ba9f1c12dfb234c1\" returns successfully"
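	The containerd log above shows kindnet-cni on attempts 2 through 4: the container keeps exiting (the "shim disconnected" lines) and kubelet recreates it after an exponentially growing delay. Kubelet's crash-loop backoff conventionally doubles from a 10s base up to a 5m cap; the sketch below only illustrates that schedule, with both parameters stated as assumptions.

	    package main

	    import (
	        "fmt"
	        "time"
	    )

	    func main() {
	        backoff := 10 * time.Second   // assumed base delay
	        maxBackoff := 5 * time.Minute // assumed cap
	        for attempt := 1; attempt <= 6; attempt++ {
	            fmt.Printf("attempt %d: wait %s before restarting\n", attempt, backoff)
	            backoff *= 2
	            if backoff > maxBackoff {
	                backoff = maxBackoff
	            }
	        }
	    }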
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-20220412200421-42006
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-20220412200421-42006
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=dcd548d63d1c0dcbdc0ffc0bd37d4379117c142f
	                    minikube.k8s.io/name=old-k8s-version-20220412200421-42006
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_04_12T20_04_59_0700
	                    minikube.k8s.io/version=v1.25.2
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 12 Apr 2022 20:04:53 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 12 Apr 2022 20:16:54 +0000   Tue, 12 Apr 2022 20:04:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 12 Apr 2022 20:16:54 +0000   Tue, 12 Apr 2022 20:04:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 12 Apr 2022 20:16:54 +0000   Tue, 12 Apr 2022 20:04:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Tue, 12 Apr 2022 20:16:54 +0000   Tue, 12 Apr 2022 20:04:50 +0000   KubeletNotReady              runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    old-k8s-version-20220412200421-42006
	Capacity:
	 cpu:                8
	 ephemeral-storage:  304695084Ki
	 hugepages-1Gi:      0
	 hugepages-2Mi:      0
	 memory:             32873828Ki
	 pods:               110
	Allocatable:
	 cpu:                8
	 ephemeral-storage:  304695084Ki
	 hugepages-1Gi:      0
	 hugepages-2Mi:      0
	 memory:             32873828Ki
	 pods:               110
	System Info:
	 Machine ID:                 140a143b31184b58be947b52a01fff83
	 System UUID:                0b57e9d3-0bbc-4976-a928-dc02ca892e39
	 Boot ID:                    16b2caa1-c1b9-4ccc-85b8-d4dc3f51a5e1
	 Kernel Version:             5.13.0-1023-gcp
	 OS Image:                   Ubuntu 20.04.4 LTS
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  containerd://1.5.10
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (6 in total)
	  Namespace                  Name                                                            CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                                            ------------  ----------  ---------------  -------------  ---
	  kube-system                etcd-old-k8s-version-20220412200421-42006                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                kindnet-xxqjk                                                   100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                kube-apiserver-old-k8s-version-20220412200421-42006             250m (3%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                kube-controller-manager-old-k8s-version-20220412200421-42006    200m (2%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                kube-proxy-nt4pk                                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                kube-scheduler-old-k8s-version-20220412200421-42006             100m (1%)     0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                650m (8%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From                                              Message
	  ----    ------                   ----               ----                                              -------
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)  kubelet, old-k8s-version-20220412200421-42006     Node old-k8s-version-20220412200421-42006 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet, old-k8s-version-20220412200421-42006     Node old-k8s-version-20220412200421-42006 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)  kubelet, old-k8s-version-20220412200421-42006     Node old-k8s-version-20220412200421-42006 status is now: NodeHasSufficientPID
	  Normal  Starting                 12m                kube-proxy, old-k8s-version-20220412200421-42006  Starting kube-proxy.
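	The Ready=False condition above pins the failure on "runtime network not ready ... cni plugin not initialized": the runtime cannot find a usable CNI configuration, which also explains the node.kubernetes.io/not-ready:NoSchedule taint on this node. A quick, hedged way to inspect the usual locations on the node follows; the two paths are common defaults (this run stats /opt/cni/bin/portmap earlier), and the runtime may be configured with different CNI paths.

	    package main

	    import (
	        "fmt"
	        "os"
	    )

	    func main() {
	        // Common default locations; adjust if the container runtime is
	        // configured with different CNI conf/bin directories.
	        for _, dir := range []string{"/etc/cni/net.d", "/opt/cni/bin"} {
	            entries, err := os.ReadDir(dir)
	            if err != nil {
	                fmt.Printf("%s: %v\n", dir, err)
	                continue
	            }
	            for _, e := range entries {
	                fmt.Printf("%s/%s\n", dir, e.Name())
	            }
	        }
	    }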
	
	* 
	* ==> dmesg <==
	* [  +0.000009] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +0.125166] IPv4: martian source 10.244.0.7 from 10.244.0.7, on dev vethe3e22a2f
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff d2 83 e6 b4 2e c9 08 06
	[  +0.519855] IPv4: martian source 10.244.0.8 from 10.244.0.8, on dev vethde433a44
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fe f7 53 8a eb 26 08 06
	[  +0.208112] IPv4: martian source 10.244.0.9 from 10.244.0.9, on dev veth05fda112
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 06 c9 f0 64 c1 d9 08 06
	[Apr12 20:12] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +1.026706] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +1.023926] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +2.947865] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +1.023840] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +1.019933] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +2.959880] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +1.007861] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +1.023916] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	
	* 
	* ==> etcd [f29f2d4e263bc07cd05cd9c61510d49796a96af91aaf3c20135c8e50227408a5] <==
	* 2022-04-12 20:04:49.582091 I | embed: listening for metrics on http://127.0.0.1:2381
	2022-04-12 20:04:49.806733 I | raft: 8688e899f7831fc7 is starting a new election at term 1
	2022-04-12 20:04:49.806783 I | raft: 8688e899f7831fc7 became candidate at term 2
	2022-04-12 20:04:49.806798 I | raft: 8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 2
	2022-04-12 20:04:49.806811 I | raft: 8688e899f7831fc7 became leader at term 2
	2022-04-12 20:04:49.806819 I | raft: raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 2
	2022-04-12 20:04:49.807090 I | etcdserver: published {Name:old-k8s-version-20220412200421-42006 ClientURLs:[https://192.168.67.2:2379]} to cluster 9d8fdeb88b6def78
	2022-04-12 20:04:49.807114 I | embed: ready to serve client requests
	2022-04-12 20:04:49.807165 I | etcdserver: setting up the initial cluster version to 3.3
	2022-04-12 20:04:49.807314 I | embed: ready to serve client requests
	2022-04-12 20:04:49.807714 N | etcdserver/membership: set the initial cluster version to 3.3
	2022-04-12 20:04:49.807811 I | etcdserver/api: enabled capabilities for version 3.3
	2022-04-12 20:04:49.808554 I | embed: serving client requests on 192.168.67.2:2379
	2022-04-12 20:04:49.808691 I | embed: serving client requests on 127.0.0.1:2379
	2022-04-12 20:04:54.979482 W | etcdserver: request "header:<ID:2289939807800189654 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/priorityclasses/system-node-critical\" mod_revision:0 > success:<request_put:<key:\"/registry/priorityclasses/system-node-critical\" value_size:221 >> failure:<>>" with result "size:14" took too long (127.000495ms) to execute
	2022-04-12 20:04:54.980336 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "range_response_count:0 size:4" took too long (131.368725ms) to execute
	2022-04-12 20:04:54.981355 W | etcdserver: read-only range request "key:\"/registry/clusterroles/system:aggregate-to-view\" " with result "range_response_count:0 size:4" took too long (185.420261ms) to execute
	2022-04-12 20:05:08.444522 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/cronjob-controller\" " with result "range_response_count:1 size:203" took too long (237.985152ms) to execute
	2022-04-12 20:05:08.611060 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/replication-controller\" " with result "range_response_count:0 size:5" took too long (156.655583ms) to execute
	2022-04-12 20:05:08.611112 W | etcdserver: read-only range request "key:\"/registry/jobs/\" range_end:\"/registry/jobs0\" limit:500 " with result "range_response_count:0 size:5" took too long (156.642288ms) to execute
	2022-04-12 20:05:11.193931 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/deployment-controller\" " with result "range_response_count:0 size:5" took too long (179.101374ms) to execute
	2022-04-12 20:05:11.556922 W | etcdserver: request "header:<ID:2289939807800190189 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/deployment-controller\" mod_revision:266 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/deployment-controller\" value_size:178 >> failure:<request_range:<key:\"/registry/serviceaccounts/kube-system/deployment-controller\" > >>" with result "size:16" took too long (184.09372ms) to execute
	2022-04-12 20:05:11.557051 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/certificate-controller\" " with result "range_response_count:0 size:5" took too long (259.936755ms) to execute
	2022-04-12 20:14:50.523759 I | mvcc: store.index: compact 453
	2022-04-12 20:14:50.524585 I | mvcc: finished scheduled compaction at 453 (took 486.812µs)
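	The etcd warnings above flag read-only range requests that "took too long" (over 100ms) against keys like /registry/namespaces/default. For reference, a hedged sketch of timing an equivalent range request with the etcd v3 client; the endpoint comes from the log, and a real minikube etcd additionally requires the apiserver client certificates, omitted here, so without them the call is expected to error rather than measure a live request.

	    package main

	    import (
	        "context"
	        "fmt"
	        "time"

	        clientv3 "go.etcd.io/etcd/client/v3"
	    )

	    func main() {
	        cli, err := clientv3.New(clientv3.Config{
	            Endpoints:   []string{"https://192.168.67.2:2379"}, // from the log above
	            DialTimeout: 5 * time.Second,
	        })
	        if err != nil {
	            panic(err)
	        }
	        defer cli.Close()

	        start := time.Now()
	        _, err = cli.Get(context.TODO(), "/registry/namespaces/default")
	        fmt.Printf("range request took %s (err=%v)\n", time.Since(start), err)
	    }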
	
	* 
	* ==> kernel <==
	*  20:17:21 up  2:59,  0 users,  load average: 0.17, 0.72, 1.35
	Linux old-k8s-version-20220412200421-42006 5.13.0-1023-gcp #28~20.04.1-Ubuntu SMP Wed Mar 30 03:51:07 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [e47ba7bc7187c135dde6e6c116fd570d9338c6fa80edee55405758c75532e6db] <==
	* I0412 20:04:53.817855       1 naming_controller.go:288] Starting NamingConditionController
	I0412 20:04:53.817876       1 establishing_controller.go:73] Starting EstablishingController
	I0412 20:04:53.817895       1 nonstructuralschema_controller.go:191] Starting NonStructuralSchemaConditionController
	I0412 20:04:53.821058       1 apiapproval_controller.go:185] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0412 20:04:53.886711       1 cache.go:39] Caches are synced for autoregister controller
	I0412 20:04:53.888960       1 shared_informer.go:204] Caches are synced for crd-autoregister 
	I0412 20:04:53.894066       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0412 20:04:53.912646       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0412 20:04:54.785212       1 controller.go:107] OpenAPI AggregationController: Processing item 
	I0412 20:04:54.785323       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0412 20:04:54.785532       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0412 20:04:54.981976       1 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
	I0412 20:04:54.989210       1 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
	I0412 20:04:54.989520       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0412 20:04:55.602026       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I0412 20:04:56.835537       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0412 20:04:57.115593       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0412 20:04:57.408794       1 lease.go:222] Resetting endpoints for master service "kubernetes" to [192.168.67.2]
	I0412 20:04:57.409902       1 controller.go:606] quota admission added evaluator for: endpoints
	I0412 20:04:58.035069       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I0412 20:04:58.723065       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I0412 20:04:59.062703       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I0412 20:05:14.419802       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	I0412 20:05:14.457130       1 controller.go:606] quota admission added evaluator for: events.events.k8s.io
	I0412 20:05:14.798379       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	
	* 
	* ==> kube-controller-manager [e3d3ef830b73a6caad316df060603879e4acd4e12edca47bc38cbc8b4e8f67a1] <==
	* I0412 20:05:14.416160       1 shared_informer.go:204] Caches are synced for daemon sets 
	I0412 20:05:14.416404       1 shared_informer.go:204] Caches are synced for persistent volume 
	I0412 20:05:14.416449       1 shared_informer.go:204] Caches are synced for GC 
	I0412 20:05:14.416458       1 shared_informer.go:204] Caches are synced for stateful set 
	I0412 20:05:14.420747       1 shared_informer.go:204] Caches are synced for namespace 
	I0412 20:05:14.446207       1 log.go:172] [INFO] signed certificate with serial number 553674720293122649670790457411009586856850398380
	I0412 20:05:14.452389       1 event.go:255] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"d91a3f48-91ea-4047-96eb-febc4fd5896f", APIVersion:"apps/v1", ResourceVersion:"198", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-nt4pk
	I0412 20:05:14.453892       1 event.go:255] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"58fcbd78-08ad-4c23-81c3-6b4bc4796f4f", APIVersion:"apps/v1", ResourceVersion:"208", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kindnet-xxqjk
	E0412 20:05:14.485627       1 daemon_controller.go:302] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"d91a3f48-91ea-4047-96eb-febc4fd5896f", ResourceVersion:"198", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63785390699, loc:(*time.Location)(0x7776000)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc0014eb6e0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc001683ec0), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0014eb700), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc0014eb720), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.16.0", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc0014eb760)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc0017e04b0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0016e8778), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"beta.kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00168ede0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc00099e7e8)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc0016e87b8)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	I0412 20:05:14.499652       1 shared_informer.go:204] Caches are synced for cidrallocator 
	I0412 20:05:14.512453       1 range_allocator.go:359] Set node old-k8s-version-20220412200421-42006 PodCIDR to [10.244.0.0/24]
	I0412 20:05:14.581829       1 shared_informer.go:204] Caches are synced for HPA 
	I0412 20:05:14.766250       1 shared_informer.go:204] Caches are synced for ReplicaSet 
	I0412 20:05:14.796326       1 shared_informer.go:204] Caches are synced for deployment 
	I0412 20:05:14.802095       1 event.go:255] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"c3850259-9414-497e-b19b-05b488cd9753", APIVersion:"apps/v1", ResourceVersion:"336", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-5644d7b6d9 to 1
	I0412 20:05:14.808727       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-5644d7b6d9", UID:"1497655a-7413-453d-bf35-8edfda600b44", APIVersion:"apps/v1", ResourceVersion:"337", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-5644d7b6d9-z6lnj
	I0412 20:05:14.815644       1 shared_informer.go:204] Caches are synced for disruption 
	I0412 20:05:14.815672       1 disruption.go:341] Sending events to api server.
	I0412 20:05:14.882180       1 shared_informer.go:204] Caches are synced for resource quota 
	I0412 20:05:14.920223       1 shared_informer.go:204] Caches are synced for garbage collector 
	I0412 20:05:14.920251       1 garbagecollector.go:139] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0412 20:05:14.920729       1 shared_informer.go:204] Caches are synced for resource quota 
	I0412 20:05:14.978270       1 shared_informer.go:204] Caches are synced for job 
	I0412 20:05:15.817797       1 shared_informer.go:197] Waiting for caches to sync for garbage collector
	I0412 20:05:15.924972       1 shared_informer.go:204] Caches are synced for garbage collector 
	
	* 
	* ==> kube-proxy [d1642a69585f2b5d8f43901e8a491cead56c56ef33038261d4145d7959922b9b] <==
	* W0412 20:05:15.109854       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I0412 20:05:15.118694       1 node.go:135] Successfully retrieved node IP: 192.168.67.2
	I0412 20:05:15.118739       1 server_others.go:149] Using iptables Proxier.
	I0412 20:05:15.119285       1 server.go:529] Version: v1.16.0
	I0412 20:05:15.119941       1 config.go:131] Starting endpoints config controller
	I0412 20:05:15.119963       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I0412 20:05:15.119997       1 config.go:313] Starting service config controller
	I0412 20:05:15.120007       1 shared_informer.go:197] Waiting for caches to sync for service config
	I0412 20:05:15.220204       1 shared_informer.go:204] Caches are synced for endpoints config 
	I0412 20:05:15.220290       1 shared_informer.go:204] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [6cc69a6c92a9c7e418d30d94f1777cbd24a28b39c530a70bc05aa2bb9749c133] <==
	* I0412 20:04:53.828463       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	I0412 20:04:53.829174       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	E0412 20:04:53.893487       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0412 20:04:53.893757       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0412 20:04:53.893903       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0412 20:04:53.895116       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0412 20:04:53.895227       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0412 20:04:53.895262       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0412 20:04:53.896417       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0412 20:04:53.896583       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0412 20:04:53.898962       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0412 20:04:53.899567       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0412 20:04:53.899864       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0412 20:04:54.895250       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0412 20:04:54.898563       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0412 20:04:54.899824       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0412 20:04:54.900936       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0412 20:04:54.909762       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0412 20:04:54.911797       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0412 20:04:54.914318       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0412 20:04:54.915374       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0412 20:04:54.916368       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0412 20:04:54.923327       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0412 20:04:54.982883       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0412 20:05:14.813397       1 factory.go:585] pod is already present in the activeQ
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2022-04-12 20:04:30 UTC, end at Tue 2022-04-12 20:17:21 UTC. --
	Apr 12 20:15:33 old-k8s-version-20220412200421-42006 kubelet[896]: E0412 20:15:33.875100     896 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Apr 12 20:15:38 old-k8s-version-20220412200421-42006 kubelet[896]: E0412 20:15:38.876035     896 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Apr 12 20:15:43 old-k8s-version-20220412200421-42006 kubelet[896]: E0412 20:15:43.876805     896 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Apr 12 20:15:48 old-k8s-version-20220412200421-42006 kubelet[896]: E0412 20:15:48.877490     896 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Apr 12 20:15:53 old-k8s-version-20220412200421-42006 kubelet[896]: E0412 20:15:53.878412     896 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Apr 12 20:15:58 old-k8s-version-20220412200421-42006 kubelet[896]: E0412 20:15:58.879255     896 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Apr 12 20:16:03 old-k8s-version-20220412200421-42006 kubelet[896]: E0412 20:16:03.880112     896 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Apr 12 20:16:08 old-k8s-version-20220412200421-42006 kubelet[896]: E0412 20:16:08.880914     896 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Apr 12 20:16:13 old-k8s-version-20220412200421-42006 kubelet[896]: E0412 20:16:13.881695     896 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Apr 12 20:16:18 old-k8s-version-20220412200421-42006 kubelet[896]: E0412 20:16:18.882522     896 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Apr 12 20:16:23 old-k8s-version-20220412200421-42006 kubelet[896]: E0412 20:16:23.883236     896 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Apr 12 20:16:28 old-k8s-version-20220412200421-42006 kubelet[896]: E0412 20:16:28.883959     896 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Apr 12 20:16:33 old-k8s-version-20220412200421-42006 kubelet[896]: E0412 20:16:33.884767     896 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Apr 12 20:16:34 old-k8s-version-20220412200421-42006 kubelet[896]: E0412 20:16:34.688870     896 pod_workers.go:191] Error syncing pod 306e6dc0-594c-4013-acc5-0fcbdf38806f ("kindnet-xxqjk_kube-system(306e6dc0-594c-4013-acc5-0fcbdf38806f)"), skipping: failed to "StartContainer" for "kindnet-cni" with CrashLoopBackOff: "back-off 40s restarting failed container=kindnet-cni pod=kindnet-xxqjk_kube-system(306e6dc0-594c-4013-acc5-0fcbdf38806f)"
	Apr 12 20:16:38 old-k8s-version-20220412200421-42006 kubelet[896]: E0412 20:16:38.885578     896 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Apr 12 20:16:43 old-k8s-version-20220412200421-42006 kubelet[896]: E0412 20:16:43.886347     896 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Apr 12 20:16:48 old-k8s-version-20220412200421-42006 kubelet[896]: E0412 20:16:48.887056     896 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Apr 12 20:16:49 old-k8s-version-20220412200421-42006 kubelet[896]: E0412 20:16:49.673634     896 pod_workers.go:191] Error syncing pod 306e6dc0-594c-4013-acc5-0fcbdf38806f ("kindnet-xxqjk_kube-system(306e6dc0-594c-4013-acc5-0fcbdf38806f)"), skipping: failed to "StartContainer" for "kindnet-cni" with CrashLoopBackOff: "back-off 40s restarting failed container=kindnet-cni pod=kindnet-xxqjk_kube-system(306e6dc0-594c-4013-acc5-0fcbdf38806f)"
	Apr 12 20:16:53 old-k8s-version-20220412200421-42006 kubelet[896]: E0412 20:16:53.887920     896 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Apr 12 20:16:58 old-k8s-version-20220412200421-42006 kubelet[896]: E0412 20:16:58.888739     896 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Apr 12 20:17:03 old-k8s-version-20220412200421-42006 kubelet[896]: E0412 20:17:03.673654     896 pod_workers.go:191] Error syncing pod 306e6dc0-594c-4013-acc5-0fcbdf38806f ("kindnet-xxqjk_kube-system(306e6dc0-594c-4013-acc5-0fcbdf38806f)"), skipping: failed to "StartContainer" for "kindnet-cni" with CrashLoopBackOff: "back-off 40s restarting failed container=kindnet-cni pod=kindnet-xxqjk_kube-system(306e6dc0-594c-4013-acc5-0fcbdf38806f)"
	Apr 12 20:17:03 old-k8s-version-20220412200421-42006 kubelet[896]: E0412 20:17:03.889536     896 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Apr 12 20:17:08 old-k8s-version-20220412200421-42006 kubelet[896]: E0412 20:17:08.890240     896 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Apr 12 20:17:13 old-k8s-version-20220412200421-42006 kubelet[896]: E0412 20:17:13.891084     896 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Apr 12 20:17:18 old-k8s-version-20220412200421-42006 kubelet[896]: E0412 20:17:18.891960     896 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	

-- /stdout --
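In the logs above, the kube-scheduler "forbidden" errors at 20:04:53-54 are the usual startup ordering race (the scheduler begins listing resources before kubeadm has created its RBAC bindings, and the errors stop once it has); the persistent problem is the kubelet's "cni plugin not initialized", driven by the kindnet-cni container sitting in CrashLoopBackOff. A minimal triage sketch for pulling that container's own output, assuming crictl is present in the node image (it is used elsewhere in this run) and with CONTAINER_ID as a placeholder:

	# Sketch: find the crashing kindnet-cni container, then read its logs.
	# Assumes crictl is available in the node image; CONTAINER_ID is a placeholder.
	minikube -p old-k8s-version-20220412200421-42006 ssh -- sudo crictl ps -a --name kindnet-cni
	minikube -p old-k8s-version-20220412200421-42006 ssh -- sudo crictl logs CONTAINER_ID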
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20220412200421-42006 -n old-k8s-version-20220412200421-42006
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-20220412200421-42006 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: busybox coredns-5644d7b6d9-z6lnj storage-provisioner
helpers_test.go:272: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context old-k8s-version-20220412200421-42006 describe pod busybox coredns-5644d7b6d9-z6lnj storage-provisioner
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context old-k8s-version-20220412200421-42006 describe pod busybox coredns-5644d7b6d9-z6lnj storage-provisioner: exit status 1 (59.432508ms)

-- stdout --
	Name:         busybox
	Namespace:    default
	Priority:     0
	Node:         <none>
	Labels:       integration-test=busybox
	Annotations:  <none>
	Status:       Pending
	IP:           
	IPs:          <none>
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from default-token-b5lb8 (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  default-token-b5lb8:
	    Type:        Secret (a volume populated by a Secret)
	    SecretName:  default-token-b5lb8
	    Optional:    false
	QoS Class:       BestEffort
	Node-Selectors:  <none>
	Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                    From               Message
	  ----     ------            ----                   ----               -------
	  Warning  FailedScheduling  8m4s                   default-scheduler  0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.
	  Warning  FailedScheduling  5m28s (x1 over 6m58s)  default-scheduler  0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "coredns-5644d7b6d9-z6lnj" not found
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:277: kubectl --context old-k8s-version-20220412200421-42006 describe pod busybox coredns-5644d7b6d9-z6lnj storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (484.95s)
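The FailedScheduling events above and the kubelet log tell the same story: the node (named old-k8s-version-20220412200421-42006, per the controller-manager log) never sheds its node.kubernetes.io/not-ready taint because the CNI never initializes, so the busybox pod stays Pending for the full 8m0s. A quick taint check from the same context (a sketch using standard kubectl):

	# Sketch: print the node's taints; expect node.kubernetes.io/not-ready here.
	kubectl --context old-k8s-version-20220412200421-42006 \
	  get node old-k8s-version-20220412200421-42006 -o jsonpath='{.spec.taints}'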

TestStartStop/group/embed-certs/serial/DeployApp (484.59s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:180: (dbg) Run:  kubectl --context embed-certs-20220412200510-42006 create -f testdata/busybox.yaml
start_stop_delete_test.go:180: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [0e1c0bc6-cd03-459d-824b-ec843300878d] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)

=== CONT  TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:180: ***** TestStartStop/group/embed-certs/serial/DeployApp: pod "integration-test=busybox" failed to start within 8m0s: timed out waiting for the condition ****
start_stop_delete_test.go:180: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20220412200510-42006 -n embed-certs-20220412200510-42006
start_stop_delete_test.go:180: TestStartStop/group/embed-certs/serial/DeployApp: showing logs for failed pods as of 2022-04-12 20:18:10.345898178 +0000 UTC m=+3469.322196590
start_stop_delete_test.go:180: (dbg) Run:  kubectl --context embed-certs-20220412200510-42006 describe po busybox -n default
start_stop_delete_test.go:180: (dbg) kubectl --context embed-certs-20220412200510-42006 describe po busybox -n default:
Name:         busybox
Namespace:    default
Priority:     0
Node:         <none>
Labels:       integration-test=busybox
Annotations:  <none>
Status:       Pending
IP:           
IPs:          <none>
Containers:
  busybox:
    Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
    Port:       <none>
    Host Port:  <none>
    Command:
      sleep
      3600
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mdqq8 (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  kube-api-access-mdqq8:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                 From               Message
  ----     ------            ----                ----               -------
  Warning  FailedScheduling  45s (x8 over 8m1s)  default-scheduler  0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.
start_stop_delete_test.go:180: (dbg) Run:  kubectl --context embed-certs-20220412200510-42006 logs busybox -n default
start_stop_delete_test.go:180: (dbg) kubectl --context embed-certs-20220412200510-42006 logs busybox -n default:
start_stop_delete_test.go:180: wait: integration-test=busybox within 8m0s: timed out waiting for the condition
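The 8m0s wait at start_stop_delete_test.go:180 polls for a Ready pod matching integration-test=busybox; roughly the same condition can be reproduced by hand (a sketch with standard kubectl, not the harness's actual client-go polling):

	# Sketch: mirror the harness's readiness condition for the busybox test pod.
	kubectl --context embed-certs-20220412200510-42006 -n default wait pod \
	  -l integration-test=busybox --for=condition=Ready --timeout=8m0s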
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220412200510-42006
helpers_test.go:235: (dbg) docker inspect embed-certs-20220412200510-42006:

-- stdout --
	[
	    {
	        "Id": "340eb3625ebd62fb359cd33fcc6dcfaf998d12a5a7abf9d2b97ffe2759fd47b7",
	        "Created": "2022-04-12T20:05:23.305199436Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 257029,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-04-12T20:05:24.124628513Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:44d43b69f3d5ba7f801dca891b535f23f9839671e82277938ec7dc42a22c50d6",
	        "ResolvConfPath": "/var/lib/docker/containers/340eb3625ebd62fb359cd33fcc6dcfaf998d12a5a7abf9d2b97ffe2759fd47b7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/340eb3625ebd62fb359cd33fcc6dcfaf998d12a5a7abf9d2b97ffe2759fd47b7/hostname",
	        "HostsPath": "/var/lib/docker/containers/340eb3625ebd62fb359cd33fcc6dcfaf998d12a5a7abf9d2b97ffe2759fd47b7/hosts",
	        "LogPath": "/var/lib/docker/containers/340eb3625ebd62fb359cd33fcc6dcfaf998d12a5a7abf9d2b97ffe2759fd47b7/340eb3625ebd62fb359cd33fcc6dcfaf998d12a5a7abf9d2b97ffe2759fd47b7-json.log",
	        "Name": "/embed-certs-20220412200510-42006",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-20220412200510-42006:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-20220412200510-42006",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/dadeb2eddd4e44191a9cbc0ea441c3b044c125e01ecdef76eaf6f1e678a0465d-init/diff:/var/lib/docker/overlay2/a46d95d024de4bf9705eb193a92586bdab1878cd991975232b71b00099a9dcbd/diff:/var/lib/docker/overlay2/ea82ee4a684697cc3575193cd81b57372b927c9bf8e744fce634f9abd0ce56f9/diff:/var/lib/docker/overlay2/78746ad8dd0d6497f442bd186c99cfd280a7ed0ff07c9d33d217c0f00c8c4565/diff:/var/lib/docker/overlay2/a402f380eceb56655ea5f1e6ca4a61a01ae014a5df04f1a7d02d8f57ff3e6c84/diff:/var/lib/docker/overlay2/b27a231791a4d14a662f9e6e34fdd213411e56cc17149199657aa480018b3c72/diff:/var/lib/docker/overlay2/0a44e7fc2c8d5589d496b9d0585d39e8e142f48342ff9669a35c370bd0298e42/diff:/var/lib/docker/overlay2/6ca98e52ca7d4cc60d14bd2db9969dd3356e0e0ce3acd5bfb5734e6e59f52c7e/diff:/var/lib/docker/overlay2/9957a7c00c30c9d801326093ddf20994a7ee1daaa54bc4dac5c2dd6d8711bd7e/diff:/var/lib/docker/overlay2/f7a1aafecf6ee716c484b5eecbbf236a53607c253fe283c289707fad85495a88/diff:/var/lib/docker/overlay2/fe8cd126522650fedfc827751e0b74da9a882ff48de51bc9dee6428ee3bc1122/diff:/var/lib/docker/overlay2/5b4cc7e4a78288063ad39231ca158608aa28e9dec6015d4e186e4c4d6888017f/diff:/var/lib/docker/overlay2/2a754ceb6abee0f92c99667fae50c7899233e94595630e9caffbf73cda1ff741/diff:/var/lib/docker/overlay2/9e69139d9b2bc63ab678378e004018ece394ec37e8289ba5eb30901dda160da5/diff:/var/lib/docker/overlay2/3db8e6413b3a1f309b81d2e1a79c3d239c4e4568b31a6f4bf92511f477f3a61d/diff:/var/lib/docker/overlay2/5ab54e45d09e2d6da4f4228ebae3075b5974e1d847526c1011fc7368392ef0d2/diff:/var/lib/docker/overlay2/6daf6a3cf916347bbbb70ace4aab29dd0f272dc9e39d6b0bf14940470857f1d5/diff:/var/lib/docker/overlay2/b85d29df9ed74e769c82a956eb46ca4eaf51018e94270fee2f58a6f2d82c354c/diff:/var/lib/docker/overlay2/0804b9c30e0dcc68e15139106e47bca1969b010d520652c87ff1476f5da9b799/diff:/var/lib/docker/overlay2/2ef50ba91c77826aae2efca8daf7194c2d56fd8e745476a35413585cdab580a6/diff:/var/lib/docker/overlay2/6f5a272367c30d47254dedc8a42e6b2791c406c3b74fd6a8242d568e4ec362e3/diff:/var/lib/docker/overlay2/e978bd5ca7463862ca1b51d0bf19f95d916464dc866f09f1ab4a5ae4c082c3a9/diff:/var/lib/docker/overlay2/0d60a5805e276ca3bff4824250eab1d2960e9d10d28282e07652204c07dc107f/diff:/var/lib/docker/overlay2/d00efa0bc999057fcf3efdeed81022cc8b9b9871919f11d7d9199a3d22fda41b/diff:/var/lib/docker/overlay2/44d3db5bf7925c4cc8ee60008ff23d799e12ea6586850d797b930fa796788861/diff:/var/lib/docker/overlay2/4af15c525b7ce96b7fd4117c156f53cf9099702641c2907909c12b7019563d44/diff:/var/lib/docker/overlay2/ae9ca4b8da4afb1303158a42ec2ac83dc057c0eaefcd69b7eeaa094ae24a39e7/diff:/var/lib/docker/overlay2/afb8ebd776ddcba17d1056f2350cd0b303c6664964644896a92e9c07252b5d95/diff:/var/lib/docker/overlay2/41b6235378ad54ccaec907f16811e7cd66bd777db63151293f4d8247a33af8f1/diff:/var/lib/docker/overlay2/e079465076581cb577a9d5c7d676cecb6495ddd73d9fc330e734203dd7e48607/diff:/var/lib/docker/overlay2/2d3a7c3e62a99d54d94c2562e13b904453442bda8208afe73cdbe1afdbdd0684/diff:/var/lib/docker/overlay2/b9e03b9cbc1c5a9bbdbb0c99ca5d7539c2fa81a37872c40e07377b52f1950f4b/diff:/var/lib/docker/overlay2/fd0b72378869edec809e7ead1e4448ae67c73245e0e98d751c51253c80f12d56/diff:/var/lib/docker/overlay2/a34f5625ad35eb2eb1058204a5c23590d70d9aae62a3a0cf05f87501c388ccde/diff:/var/lib/docker/overlay2/6221ad5f4d7b133c35d96ab112cf2eb437196475a72ea0ec8952c058c6644381/diff:/var/lib/docker/overlay2/b33a322162ab62a47e5e731b35da4a989d8a79fcb67e1925b109eace6772370c/diff:/var/lib/docker/overlay2/b52fc81aca49f276f1c709fa139521063628f4042b9da5969a3487a57ee3226b/diff:/var/lib/docker/overlay2/5b4d11a181cad1ea657c7ea99d422b51c942ece21b8d24442b4e8806644e0e1c/diff:/var/lib/docker/overlay2/1620ce1d42f02f38d07f3ff0970e3df6940a3be20f3c7cd835f4f40f5cc2d010/diff:/var/lib/docker/overlay2/43f18c528700dc241024bb24f43a0d5192ecc9575f4b053582410f6265326434/diff:/var/lib/docker/overlay2/e59874999e485483e50da428a499e40c91890c33515857454d7a64bc04ca0c43/diff:/var/lib/docker/overlay2/a120ff1bbaa325cd87d2682d6751d3bf287b66d4bbe31bd1f9f6283d724491ac/diff:/var/lib/docker/overlay2/a6a6f3646fabc023283ff6349b9627be8332c4bb740688f8fda12c98bd76b725/diff:/var/lib/docker/overlay2/3c2b110c4b3a8689b2792b2b73f99f06bd9858b494c2164e812208579b0223f2/diff:/var/lib/docker/overlay2/98e3881e2e4128283f8d66fafc082bc795e22eab77f135635d3249367b92ba5c/diff:/var/lib/docker/overlay2/ce937670cf64eff618c699bfd15e46c6d70c0184fef594182e5ec6df83b265bc/diff",
	                "MergedDir": "/var/lib/docker/overlay2/dadeb2eddd4e44191a9cbc0ea441c3b044c125e01ecdef76eaf6f1e678a0465d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/dadeb2eddd4e44191a9cbc0ea441c3b044c125e01ecdef76eaf6f1e678a0465d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/dadeb2eddd4e44191a9cbc0ea441c3b044c125e01ecdef76eaf6f1e678a0465d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-20220412200510-42006",
	                "Source": "/var/lib/docker/volumes/embed-certs-20220412200510-42006/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-20220412200510-42006",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-20220412200510-42006",
	                "name.minikube.sigs.k8s.io": "embed-certs-20220412200510-42006",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cfc6cecb94535d9fe135b877fee8b93f35d43a7969a073acac3b2c920f4dbb93",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49402"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49401"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49398"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49400"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49399"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/cfc6cecb9453",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-20220412200510-42006": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "340eb3625ebd",
	                        "embed-certs-20220412200510-42006"
	                    ],
	                    "NetworkID": "4ace6a0fae231d855dc7c20348778126fda239556e97939a30b4df667ae930f8",
	                    "EndpointID": "c940297a63e2c35df1a11c0d38d5e5fab82464350b8665dcb6e65be5ac8cc428",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
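The inspect dump shows the kic container itself is healthy: running since 20:05:24, with the API server's 8443/tcp published on 127.0.0.1:49399, so the deploy failure sits inside the cluster rather than at the Docker layer. Single fields can be pulled without the full dump via docker inspect's -f Go template (a sketch):

	# Sketch: print only the host port bound to the API server's 8443/tcp.
	docker inspect -f '{{ (index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort }}' \
	  embed-certs-20220412200510-42006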
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20220412200510-42006 -n embed-certs-20220412200510-42006
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-20220412200510-42006 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-20220412200510-42006 logs -n 25: (1.050572488s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/DeployApp logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                            Args                            |                     Profile                     |  User   | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| start   | -p                                                         | no-preload-20220412200453-42006                 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:06:38 UTC | Tue, 12 Apr 2022 20:12:02 UTC |
	|         | no-preload-20220412200453-42006                            |                                                 |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                 |         |         |                               |                               |
	|         | --wait=true --preload=false                                |                                                 |         |         |                               |                               |
	|         | --driver=docker                                            |                                                 |         |         |                               |                               |
	|         | --container-runtime=containerd                             |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.23.6-rc.0                          |                                                 |         |         |                               |                               |
	| ssh     | -p                                                         | no-preload-20220412200453-42006                 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:12:20 UTC | Tue, 12 Apr 2022 20:12:20 UTC |
	|         | no-preload-20220412200453-42006                            |                                                 |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                               |                               |
	| pause   | -p                                                         | no-preload-20220412200453-42006                 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:12:20 UTC | Tue, 12 Apr 2022 20:12:21 UTC |
	|         | no-preload-20220412200453-42006                            |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                               |                               |
	| unpause | -p                                                         | no-preload-20220412200453-42006                 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:12:22 UTC | Tue, 12 Apr 2022 20:12:23 UTC |
	|         | no-preload-20220412200453-42006                            |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                               |                               |
	| delete  | -p                                                         | no-preload-20220412200453-42006                 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:12:24 UTC | Tue, 12 Apr 2022 20:12:27 UTC |
	|         | no-preload-20220412200453-42006                            |                                                 |         |         |                               |                               |
	| delete  | -p                                                         | no-preload-20220412200453-42006                 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:12:27 UTC | Tue, 12 Apr 2022 20:12:27 UTC |
	|         | no-preload-20220412200453-42006                            |                                                 |         |         |                               |                               |
	| delete  | -p                                                         | disable-driver-mounts-20220412201227-42006      | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:12:27 UTC | Tue, 12 Apr 2022 20:12:28 UTC |
	|         | disable-driver-mounts-20220412201227-42006                 |                                                 |         |         |                               |                               |
	| -p      | bridge-20220412195202-42006                                | bridge-20220412195202-42006                     | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:12:49 UTC | Tue, 12 Apr 2022 20:12:50 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	| delete  | -p bridge-20220412195202-42006                             | bridge-20220412195202-42006                     | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:12:50 UTC | Tue, 12 Apr 2022 20:12:53 UTC |
	| start   | -p newest-cni-20220412201253-42006 --memory=2200           | newest-cni-20220412201253-42006                 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:12:53 UTC | Tue, 12 Apr 2022 20:13:47 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                 |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                 |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                 |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                 |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=containerd            |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.23.6-rc.0                          |                                                 |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | newest-cni-20220412201253-42006                 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:13:47 UTC | Tue, 12 Apr 2022 20:13:48 UTC |
	|         | newest-cni-20220412201253-42006                            |                                                 |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                               |                               |
	| stop    | -p                                                         | newest-cni-20220412201253-42006                 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:13:48 UTC | Tue, 12 Apr 2022 20:14:08 UTC |
	|         | newest-cni-20220412201253-42006                            |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | newest-cni-20220412201253-42006                 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:14:08 UTC | Tue, 12 Apr 2022 20:14:08 UTC |
	|         | newest-cni-20220412201253-42006                            |                                                 |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                               |                               |
	| start   | -p newest-cni-20220412201253-42006 --memory=2200           | newest-cni-20220412201253-42006                 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:14:08 UTC | Tue, 12 Apr 2022 20:14:42 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                 |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                 |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                 |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                 |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=containerd            |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.23.6-rc.0                          |                                                 |         |         |                               |                               |
	| ssh     | -p                                                         | newest-cni-20220412201253-42006                 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:14:43 UTC | Tue, 12 Apr 2022 20:14:43 UTC |
	|         | newest-cni-20220412201253-42006                            |                                                 |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                               |                               |
	| pause   | -p                                                         | newest-cni-20220412201253-42006                 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:14:43 UTC | Tue, 12 Apr 2022 20:14:44 UTC |
	|         | newest-cni-20220412201253-42006                            |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                               |                               |
	| unpause | -p                                                         | newest-cni-20220412201253-42006                 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:14:45 UTC | Tue, 12 Apr 2022 20:14:45 UTC |
	|         | newest-cni-20220412201253-42006                            |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                               |                               |
	| delete  | -p                                                         | newest-cni-20220412201253-42006                 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:14:46 UTC | Tue, 12 Apr 2022 20:14:49 UTC |
	|         | newest-cni-20220412201253-42006                            |                                                 |         |         |                               |                               |
	| delete  | -p                                                         | newest-cni-20220412201253-42006                 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:14:49 UTC | Tue, 12 Apr 2022 20:14:49 UTC |
	|         | newest-cni-20220412201253-42006                            |                                                 |         |         |                               |                               |
	| -p      | old-k8s-version-20220412200421-42006                       | old-k8s-version-20220412200421-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:17:18 UTC | Tue, 12 Apr 2022 20:17:19 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	| -p      | old-k8s-version-20220412200421-42006                       | old-k8s-version-20220412200421-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:17:20 UTC | Tue, 12 Apr 2022 20:17:21 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | old-k8s-version-20220412200421-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:17:22 UTC | Tue, 12 Apr 2022 20:17:22 UTC |
	|         | old-k8s-version-20220412200421-42006                       |                                                 |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                               |                               |
	| -p      | default-k8s-different-port-20220412201228-42006            | default-k8s-different-port-20220412201228-42006 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:17:23 UTC | Tue, 12 Apr 2022 20:17:24 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	| stop    | -p                                                         | old-k8s-version-20220412200421-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:17:23 UTC | Tue, 12 Apr 2022 20:17:28 UTC |
	|         | old-k8s-version-20220412200421-42006                       |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | old-k8s-version-20220412200421-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:17:29 UTC | Tue, 12 Apr 2022 20:17:29 UTC |
	|         | old-k8s-version-20220412200421-42006                       |                                                 |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                               |                               |
	|---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/04/12 20:17:29
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.18 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
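	(Reader's note on that format: each entry begins with a severity letter I/W/E/F for info/warning/error/fatal, the date as mmdd, a timestamp, the logging process id, and the source file and line. For example, "I0412 20:17:29.197380  289404 out.go:297]" is an info-level entry logged on April 12 at 20:17:29 by process 289404 from out.go line 297.)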
	I0412 20:17:29.197380  289404 out.go:297] Setting OutFile to fd 1 ...
	I0412 20:17:29.197556  289404 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0412 20:17:29.197567  289404 out.go:310] Setting ErrFile to fd 2...
	I0412 20:17:29.197574  289404 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0412 20:17:29.197697  289404 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/bin
	I0412 20:17:29.198001  289404 out.go:304] Setting JSON to false
	I0412 20:17:29.199693  289404 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":10802,"bootTime":1649783847,"procs":690,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1023-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0412 20:17:29.199774  289404 start.go:125] virtualization: kvm guest
	I0412 20:17:29.202751  289404 out.go:176] * [old-k8s-version-20220412200421-42006] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	I0412 20:17:29.204680  289404 out.go:176]   - MINIKUBE_LOCATION=13812
	I0412 20:17:29.202936  289404 notify.go:193] Checking for updates...
	I0412 20:17:29.206545  289404 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0412 20:17:29.208334  289404 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0412 20:17:29.210033  289404 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube
	I0412 20:17:29.211681  289404 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0412 20:17:29.212186  289404 config.go:178] Loaded profile config "old-k8s-version-20220412200421-42006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I0412 20:17:29.214567  289404 out.go:176] * Kubernetes 1.23.5 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.23.5
	I0412 20:17:29.214664  289404 driver.go:346] Setting default libvirt URI to qemu:///system
	I0412 20:17:29.257552  289404 docker.go:137] docker version: linux-20.10.14
	I0412 20:17:29.257664  289404 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0412 20:17:29.358882  289404 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:75 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:true NGoroutines:44 SystemTime:2022-04-12 20:17:29.289676597 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1023-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662799872 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0412 20:17:29.359016  289404 docker.go:254] overlay module found
	I0412 20:17:29.361664  289404 out.go:176] * Using the docker driver based on existing profile
	I0412 20:17:29.361689  289404 start.go:284] selected driver: docker
	I0412 20:17:29.361695  289404 start.go:801] validating driver "docker" against &{Name:old-k8s-version-20220412200421-42006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220412200421-42006 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0412 20:17:29.361823  289404 start.go:812] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	W0412 20:17:29.361867  289404 oci.go:120] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0412 20:17:29.361884  289404 out.go:241] ! Your cgroup does not allow setting memory.
	I0412 20:17:29.363683  289404 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0412 20:17:29.364314  289404 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0412 20:17:29.462530  289404 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:75 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:true NGoroutines:44 SystemTime:2022-04-12 20:17:29.395046244 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1023-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662799872 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	W0412 20:17:29.462681  289404 oci.go:120] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0412 20:17:29.462711  289404 out.go:241] ! Your cgroup does not allow setting memory.
	I0412 20:17:29.464919  289404 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0412 20:17:29.465031  289404 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0412 20:17:29.465059  289404 cni.go:93] Creating CNI manager for ""
	I0412 20:17:29.465068  289404 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0412 20:17:29.465090  289404 start_flags.go:306] config:
	{Name:old-k8s-version-20220412200421-42006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220412200421-42006 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0412 20:17:29.467276  289404 out.go:176] * Starting control plane node old-k8s-version-20220412200421-42006 in cluster old-k8s-version-20220412200421-42006
	I0412 20:17:29.467306  289404 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0412 20:17:29.468855  289404 out.go:176] * Pulling base image ...
	I0412 20:17:29.468883  289404 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0412 20:17:29.468914  289404 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon
	I0412 20:17:29.468919  289404 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4
	I0412 20:17:29.469037  289404 cache.go:57] Caching tarball of preloaded images
	I0412 20:17:29.469329  289404 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0412 20:17:29.469377  289404 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on containerd
	I0412 20:17:29.469540  289404 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220412200421-42006/config.json ...
	I0412 20:17:29.515418  289404 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon, skipping pull
	I0412 20:17:29.515453  289404 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 exists in daemon, skipping load
	I0412 20:17:29.515475  289404 cache.go:206] Successfully downloaded all kic artifacts
	I0412 20:17:29.515513  289404 start.go:352] acquiring machines lock for old-k8s-version-20220412200421-42006: {Name:mk51335e8aecb7357290fc27d80d48b525f2bff6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0412 20:17:29.515623  289404 start.go:356] acquired machines lock for "old-k8s-version-20220412200421-42006" in 87.128µs
	I0412 20:17:29.515653  289404 start.go:94] Skipping create...Using existing machine configuration
	I0412 20:17:29.515665  289404 fix.go:55] fixHost starting: 
	I0412 20:17:29.515986  289404 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220412200421-42006 --format={{.State.Status}}
	I0412 20:17:29.551090  289404 fix.go:103] recreateIfNeeded on old-k8s-version-20220412200421-42006: state=Stopped err=<nil>
	W0412 20:17:29.551126  289404 fix.go:129] unexpected machine state, will restart: <nil>
	I0412 20:17:29.554026  289404 out.go:176] * Restarting existing docker container for "old-k8s-version-20220412200421-42006" ...
	I0412 20:17:29.554110  289404 cli_runner.go:164] Run: docker start old-k8s-version-20220412200421-42006
	I0412 20:17:29.948290  289404 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220412200421-42006 --format={{.State.Status}}
	I0412 20:17:29.983637  289404 kic.go:416] container "old-k8s-version-20220412200421-42006" state is running.
	I0412 20:17:29.984024  289404 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220412200421-42006
	I0412 20:17:30.018880  289404 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220412200421-42006/config.json ...
	I0412 20:17:30.019121  289404 machine.go:88] provisioning docker machine ...
	I0412 20:17:30.019150  289404 ubuntu.go:169] provisioning hostname "old-k8s-version-20220412200421-42006"
	I0412 20:17:30.019209  289404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220412200421-42006
	I0412 20:17:30.056483  289404 main.go:134] libmachine: Using SSH client type: native
	I0412 20:17:30.056726  289404 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7d71c0] 0x7da220 <nil>  [] 0s} 127.0.0.1 49427 <nil> <nil>}
	I0412 20:17:30.056753  289404 main.go:134] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-20220412200421-42006 && echo "old-k8s-version-20220412200421-42006" | sudo tee /etc/hostname
	I0412 20:17:30.057485  289404 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50282->127.0.0.1:49427: read: connection reset by peer
	I0412 20:17:33.190100  289404 main.go:134] libmachine: SSH cmd err, output: <nil>: old-k8s-version-20220412200421-42006
	
	I0412 20:17:33.190188  289404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220412200421-42006
	I0412 20:17:33.225478  289404 main.go:134] libmachine: Using SSH client type: native
	I0412 20:17:33.225643  289404 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7d71c0] 0x7da220 <nil>  [] 0s} 127.0.0.1 49427 <nil> <nil>}
	I0412 20:17:33.225665  289404 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-20220412200421-42006' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-20220412200421-42006/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-20220412200421-42006' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0412 20:17:33.344395  289404 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0412 20:17:33.344433  289404 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube}
	I0412 20:17:33.344501  289404 ubuntu.go:177] setting up certificates
	I0412 20:17:33.344513  289404 provision.go:83] configureAuth start
	I0412 20:17:33.344580  289404 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220412200421-42006
	I0412 20:17:33.379393  289404 provision.go:138] copyHostCerts
	I0412 20:17:33.379467  289404 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem, removing ...
	I0412 20:17:33.379479  289404 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem
	I0412 20:17:33.379543  289404 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem (1082 bytes)
	I0412 20:17:33.379687  289404 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem, removing ...
	I0412 20:17:33.379705  289404 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem
	I0412 20:17:33.379735  289404 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem (1123 bytes)
	I0412 20:17:33.379802  289404 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem, removing ...
	I0412 20:17:33.379810  289404 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem
	I0412 20:17:33.379832  289404 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem (1675 bytes)
	I0412 20:17:33.379899  289404 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-20220412200421-42006 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-20220412200421-42006]
	I0412 20:17:33.613592  289404 provision.go:172] copyRemoteCerts
	I0412 20:17:33.613653  289404 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0412 20:17:33.613694  289404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220412200421-42006
	I0412 20:17:33.650564  289404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49427 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/old-k8s-version-20220412200421-42006/id_rsa Username:docker}
	I0412 20:17:33.739873  289404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0412 20:17:33.758647  289404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem --> /etc/docker/server.pem (1277 bytes)
	I0412 20:17:33.776884  289404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0412 20:17:33.794757  289404 provision.go:86] duration metric: configureAuth took 450.228367ms
	I0412 20:17:33.794785  289404 ubuntu.go:193] setting minikube options for container-runtime
	I0412 20:17:33.794975  289404 config.go:178] Loaded profile config "old-k8s-version-20220412200421-42006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I0412 20:17:33.794989  289404 machine.go:91] provisioned docker machine in 3.775852896s
	I0412 20:17:33.794997  289404 start.go:306] post-start starting for "old-k8s-version-20220412200421-42006" (driver="docker")
	I0412 20:17:33.795005  289404 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0412 20:17:33.795058  289404 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0412 20:17:33.795106  289404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220412200421-42006
	I0412 20:17:33.828573  289404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49427 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/old-k8s-version-20220412200421-42006/id_rsa Username:docker}
	I0412 20:17:33.915698  289404 ssh_runner.go:195] Run: cat /etc/os-release
	I0412 20:17:33.918851  289404 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0412 20:17:33.918873  289404 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0412 20:17:33.918893  289404 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0412 20:17:33.918900  289404 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0412 20:17:33.918911  289404 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/addons for local assets ...
	I0412 20:17:33.918969  289404 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files for local assets ...
	I0412 20:17:33.919030  289404 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/420062.pem -> 420062.pem in /etc/ssl/certs
	I0412 20:17:33.919114  289404 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0412 20:17:33.926132  289404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/420062.pem --> /etc/ssl/certs/420062.pem (1708 bytes)
	I0412 20:17:33.943473  289404 start.go:309] post-start completed in 148.459431ms
	I0412 20:17:33.943559  289404 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0412 20:17:33.943611  289404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220412200421-42006
	I0412 20:17:33.979296  289404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49427 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/old-k8s-version-20220412200421-42006/id_rsa Username:docker}
	I0412 20:17:34.068745  289404 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0412 20:17:34.072931  289404 fix.go:57] fixHost completed within 4.557261996s
	I0412 20:17:34.072964  289404 start.go:81] releasing machines lock for "old-k8s-version-20220412200421-42006", held for 4.557323673s
	I0412 20:17:34.073067  289404 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220412200421-42006
	I0412 20:17:34.108785  289404 ssh_runner.go:195] Run: systemctl --version
	I0412 20:17:34.108829  289404 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0412 20:17:34.108852  289404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220412200421-42006
	I0412 20:17:34.108889  289404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220412200421-42006
	I0412 20:17:34.147630  289404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49427 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/old-k8s-version-20220412200421-42006/id_rsa Username:docker}
	I0412 20:17:34.147961  289404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49427 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/old-k8s-version-20220412200421-42006/id_rsa Username:docker}
	I0412 20:17:34.232522  289404 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0412 20:17:34.259820  289404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0412 20:17:34.270552  289404 docker.go:183] disabling docker service ...
	I0412 20:17:34.270627  289404 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0412 20:17:34.281466  289404 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0412 20:17:34.291898  289404 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0412 20:17:34.372403  289404 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0412 20:17:34.452290  289404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0412 20:17:34.462444  289404 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0412 20:17:34.475927  289404 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLm1vbml0b3IudjEuY2dyb3VwcyJdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuMSIKICAgIHN0YXRzX2NvbGxlY3RfcGVyaW9kID0gMTAKICAgIGVuYWJsZV90bHNfc3RyZWFtaW5nID0gZmFsc2UKICAgIG1heF9jb250YWluZXJfbG9nX2xpbmVfc2l6ZSA9IDE2Mzg0CiAgICByZXN0cmljdF9vb21fc2NvcmVfYWRqID0gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZF0KICAgICAgZGlzY2FyZF91bnBhY2tlZF9sYXllcnMgPSB0cnVlCiAgICAgIHNuYXBzaG90dGVyID0gIm92ZXJsYXlmcyIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQuZGVmYXVsdF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnVudHJ1c3RlZF93b3JrbG9hZF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICIiCiAgICAgICAgcnVudGltZV9lbmdpbmUgPSAiIgogICAgICAgIHJ1bnRpbWVfcm9vdCA9ICIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzLnJ1bmNdCiAgICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICBTeXN0ZW1kQ2dyb3VwID0gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY25pXQogICAgICBiaW5fZGlyID0gIi9vcHQvY25pL2JpbiIKICAgICAgY29uZl9kaXIgPSAiL2V0Yy9jbmkvbmV0Lm1rIgogICAgICBjb25mX3RlbXBsYXRlID0gIiIKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5yZWdpc3RyeV0KICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5Lm1pcnJvcnNdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5Lm1pcnJvcnMuImRvY2tlci5pbyJdCiAgICAgICAgICBlbmRwb2ludCA9IFsiaHR0cHM6Ly9yZWdpc3RyeS0xLmRvY2tlci5pbyJdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuc2VydmljZS52MS5kaWZmLXNlcnZpY2UiXQogICAgZGVmYXVsdCA9IFsid2Fsa2luZyJdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ2MudjEuc2NoZWR1bGVyIl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
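	(The long quoted argument above is the generated containerd configuration, shipped as a base64 blob and unpacked on the node with "base64 -d". To inspect what actually lands in /etc/containerd/config.toml, the same blob can be decoded off-node; in this sketch BLOB is a hypothetical shell variable holding the base64 string copied from the log:
	
		printf '%s' "$BLOB" | base64 -d | less
	
	Decoding it shows, among other settings, the conf_dir = "/etc/cni/net.mk" and SystemdCgroup = false values that match the kubelet flags used later in this run.)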
	I0412 20:17:34.489911  289404 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0412 20:17:34.497073  289404 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0412 20:17:34.504299  289404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0412 20:17:34.584100  289404 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0412 20:17:34.657988  289404 start.go:441] Will wait 60s for socket path /run/containerd/containerd.sock
	I0412 20:17:34.658055  289404 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0412 20:17:34.661997  289404 start.go:462] Will wait 60s for crictl version
	I0412 20:17:34.662052  289404 ssh_runner.go:195] Run: sudo crictl version
	I0412 20:17:34.688749  289404 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-04-12T20:17:34Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
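	(This failure is expected immediately after "systemctl restart containerd": the CRI server needs a moment before it can answer "crictl version", so minikube backs off and retries, succeeding ~11s later below. A hand-rolled equivalent of that wait, as a sketch with an arbitrary 2-second poll interval, would be:
	
		# poll until the CRI endpoint configured in /etc/crictl.yaml responds
		until sudo crictl version >/dev/null 2>&1; do sleep 2; done
	)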
	I0412 20:17:45.736377  289404 ssh_runner.go:195] Run: sudo crictl version
	I0412 20:17:45.764253  289404 start.go:471] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.5.10
	RuntimeApiVersion:  v1alpha2
	I0412 20:17:45.764317  289404 ssh_runner.go:195] Run: containerd --version
	I0412 20:17:45.788116  289404 ssh_runner.go:195] Run: containerd --version
	I0412 20:17:45.813804  289404 out.go:176] * Preparing Kubernetes v1.16.0 on containerd 1.5.10 ...
	I0412 20:17:45.813902  289404 cli_runner.go:164] Run: docker network inspect old-k8s-version-20220412200421-42006 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0412 20:17:45.850078  289404 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0412 20:17:45.853619  289404 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
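	(The one-liner above is how minikube patches /etc/hosts: it filters out any stale host.minikube.internal entry, appends the fresh mapping, writes the result to a temp file, and then copies that file back with "sudo cp". The temp-file dance is needed because a plain shell redirection into /etc/hosts would run with the unprivileged user's permissions, not root's.)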
	I0412 20:17:45.866312  289404 out.go:176]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0412 20:17:45.866409  289404 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0412 20:17:45.866484  289404 ssh_runner.go:195] Run: sudo crictl images --output json
	I0412 20:17:45.891403  289404 containerd.go:607] all images are preloaded for containerd runtime.
	I0412 20:17:45.891432  289404 containerd.go:521] Images already preloaded, skipping extraction
	I0412 20:17:45.891488  289404 ssh_runner.go:195] Run: sudo crictl images --output json
	I0412 20:17:45.917465  289404 containerd.go:607] all images are preloaded for containerd runtime.
	I0412 20:17:45.917491  289404 cache_images.go:84] Images are preloaded, skipping loading
	I0412 20:17:45.917536  289404 ssh_runner.go:195] Run: sudo crictl info
	I0412 20:17:45.942935  289404 cni.go:93] Creating CNI manager for ""
	I0412 20:17:45.942975  289404 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0412 20:17:45.942995  289404 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0412 20:17:45.943016  289404 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-20220412200421-42006 NodeName:old-k8s-version-20220412200421-42006 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0412 20:17:45.943146  289404 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "old-k8s-version-20220412200421-42006"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-20220412200421-42006
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.67.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
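	(The config rendered above bundles four API objects in one file: kubeadm's InitConfiguration and ClusterConfiguration, plus a KubeletConfiguration and a KubeProxyConfiguration, separated by "---". It is copied to /var/tmp/minikube/kubeadm.yaml.new a few lines below and would then be consumed through kubeadm's --config flag; roughly, and only as a sketch of the later invocation:
	
		sudo /var/lib/minikube/binaries/v1.16.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml
	)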
	
	I0412 20:17:45.943244  289404 kubeadm.go:936] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-20220412200421-42006 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220412200421-42006 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0412 20:17:45.943306  289404 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0412 20:17:45.951356  289404 binaries.go:44] Found k8s binaries, skipping transfer
	I0412 20:17:45.951429  289404 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0412 20:17:45.959142  289404 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (581 bytes)
	I0412 20:17:45.973290  289404 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0412 20:17:45.987363  289404 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
	I0412 20:17:46.000890  289404 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0412 20:17:46.003861  289404 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0412 20:17:46.013912  289404 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220412200421-42006 for IP: 192.168.67.2
	I0412 20:17:46.014036  289404 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.key
	I0412 20:17:46.014072  289404 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.key
	I0412 20:17:46.014139  289404 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220412200421-42006/client.key
	I0412 20:17:46.014193  289404 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220412200421-42006/apiserver.key.c7fa3a9e
	I0412 20:17:46.014227  289404 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220412200421-42006/proxy-client.key
	I0412 20:17:46.014315  289404 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/42006.pem (1338 bytes)
	W0412 20:17:46.014376  289404 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/42006_empty.pem, impossibly tiny 0 bytes
	I0412 20:17:46.014389  289404 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem (1679 bytes)
	I0412 20:17:46.014416  289404 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem (1082 bytes)
	I0412 20:17:46.014441  289404 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem (1123 bytes)
	I0412 20:17:46.014463  289404 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem (1675 bytes)
	I0412 20:17:46.014502  289404 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/420062.pem (1708 bytes)
	I0412 20:17:46.015054  289404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220412200421-42006/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0412 20:17:46.033250  289404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220412200421-42006/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0412 20:17:46.051612  289404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220412200421-42006/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0412 20:17:46.069438  289404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220412200421-42006/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0412 20:17:46.087429  289404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0412 20:17:46.106400  289404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0412 20:17:46.126331  289404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0412 20:17:46.144926  289404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0412 20:17:46.163659  289404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/420062.pem --> /usr/share/ca-certificates/420062.pem (1708 bytes)
	I0412 20:17:46.182405  289404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0412 20:17:46.201225  289404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/42006.pem --> /usr/share/ca-certificates/42006.pem (1338 bytes)
	I0412 20:17:46.220095  289404 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0412 20:17:46.233532  289404 ssh_runner.go:195] Run: openssl version
	I0412 20:17:46.238551  289404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/420062.pem && ln -fs /usr/share/ca-certificates/420062.pem /etc/ssl/certs/420062.pem"
	I0412 20:17:46.246882  289404 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/420062.pem
	I0412 20:17:46.250144  289404 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Apr 12 19:26 /usr/share/ca-certificates/420062.pem
	I0412 20:17:46.250198  289404 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/420062.pem
	I0412 20:17:46.255293  289404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/420062.pem /etc/ssl/certs/3ec20f2e.0"
	I0412 20:17:46.263296  289404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0412 20:17:46.271317  289404 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0412 20:17:46.274644  289404 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Apr 12 19:21 /usr/share/ca-certificates/minikubeCA.pem
	I0412 20:17:46.274711  289404 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0412 20:17:46.279819  289404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0412 20:17:46.287252  289404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/42006.pem && ln -fs /usr/share/ca-certificates/42006.pem /etc/ssl/certs/42006.pem"
	I0412 20:17:46.295001  289404 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42006.pem
	I0412 20:17:46.298255  289404 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Apr 12 19:26 /usr/share/ca-certificates/42006.pem
	I0412 20:17:46.298337  289404 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42006.pem
	I0412 20:17:46.303307  289404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/42006.pem /etc/ssl/certs/51391683.0"
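
Note: the `openssl x509 -hash` / `ln -fs` pairs above follow the standard OpenSSL trust-store layout, where each CA is reachable as /etc/ssl/certs/<subject-hash>.0. A minimal local Go sketch of the same pattern (the helper name installCA is illustrative; minikube actually runs these shell commands over SSH inside the node):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // installCA mirrors the log's "openssl x509 -hash" + "ln -fs <hash>.0" steps.
    func installCA(pem string) error {
    	// openssl prints the subject-name hash used for the <hash>.0 lookup scheme.
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
    	if err != nil {
    		return fmt.Errorf("hashing %s: %w", pem, err)
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	_ = os.Remove(link) // ln -fs semantics: replace any stale symlink
    	return os.Symlink(pem, link)
    }

    func main() {
    	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }
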
	I0412 20:17:46.310562  289404 kubeadm.go:391] StartCluster: {Name:old-k8s-version-20220412200421-42006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220412200421-42006 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0412 20:17:46.310692  289404 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0412 20:17:46.310766  289404 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0412 20:17:46.336676  289404 cri.go:87] found id: "1bd2c2fccd8c547472f81fd84ffcc85248838b6f6bded8d4ba9f1c12dfb234c1"
	I0412 20:17:46.336702  289404 cri.go:87] found id: "f03411fc533041f9ddcf991f18a51b6055896e203a19557ce49131bc9e7796b4"
	I0412 20:17:46.336709  289404 cri.go:87] found id: "d1642a69585f2b5d8f43901e8a491cead56c56ef33038261d4145d7959922b9b"
	I0412 20:17:46.336718  289404 cri.go:87] found id: "6cc69a6c92a9c7e418d30d94f1777cbd24a28b39c530a70bc05aa2bb9749c133"
	I0412 20:17:46.336726  289404 cri.go:87] found id: "e47ba7bc7187c135dde6e6c116fd570d9338c6fa80edee55405758c75532e6db"
	I0412 20:17:46.336732  289404 cri.go:87] found id: "f29f2d4e263bc07cd05cd9c61510d49796a96af91aaf3c20135c8e50227408a5"
	I0412 20:17:46.336737  289404 cri.go:87] found id: "e3d3ef830b73a6caad316df060603879e4acd4e12edca47bc38cbc8b4e8f67a1"
	I0412 20:17:46.336743  289404 cri.go:87] found id: ""
	I0412 20:17:46.336781  289404 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0412 20:17:46.350978  289404 cri.go:114] JSON = null
	W0412 20:17:46.351029  289404 kubeadm.go:398] unpause failed: list paused: list returned 0 containers, but ps returned 7
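
Note: the warning above comes from cross-checking two views of the runtime: `crictl ps` reported 7 kube-system containers, while `runc list -f json` returned `null` (no paused containers to resume). A rough Go sketch of that consistency check (names here are illustrative, not minikube's internal API):

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    	"strings"
    )

    type runcContainer struct {
    	ID     string `json:"id"`
    	Status string `json:"status"`
    }

    func main() {
    	// IDs as crictl sees them, filtered by the kube-system pod label.
    	psOut, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
    		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
    	if err != nil {
    		panic(err)
    	}
    	ids := strings.Fields(string(psOut))

    	// Containers as runc sees them; the literal "null" in the log
    	// decodes to a nil slice here.
    	listOut, err := exec.Command("sudo", "runc",
    		"--root", "/run/containerd/runc/k8s.io", "list", "-f", "json").Output()
    	if err != nil {
    		panic(err)
    	}
    	var listed []runcContainer
    	if err := json.Unmarshal(listOut, &listed); err != nil {
    		panic(err)
    	}
    	if len(listed) != len(ids) {
    		fmt.Printf("list returned %d containers, but ps returned %d\n",
    			len(listed), len(ids))
    	}
    }
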
	I0412 20:17:46.351077  289404 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0412 20:17:46.359069  289404 kubeadm.go:402] found existing configuration files, will attempt cluster restart
	I0412 20:17:46.359093  289404 kubeadm.go:601] restartCluster start
	I0412 20:17:46.359140  289404 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0412 20:17:46.366326  289404 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:17:46.367582  289404 kubeconfig.go:116] verify returned: extract IP: "old-k8s-version-20220412200421-42006" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0412 20:17:46.368444  289404 kubeconfig.go:127] "old-k8s-version-20220412200421-42006" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig - will repair!
	I0412 20:17:46.369647  289404 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig: {Name:mk47182dadcc139652898f38b199a7292a3b4031 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 20:17:46.371957  289404 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0412 20:17:46.379643  289404 api_server.go:165] Checking apiserver status ...
	I0412 20:17:46.379702  289404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:17:46.388397  289404 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:17:46.588796  289404 api_server.go:165] Checking apiserver status ...
	I0412 20:17:46.588874  289404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:17:46.598135  289404 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:17:46.789302  289404 api_server.go:165] Checking apiserver status ...
	I0412 20:17:46.789389  289404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:17:46.798209  289404 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:17:46.989529  289404 api_server.go:165] Checking apiserver status ...
	I0412 20:17:46.989625  289404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:17:46.998886  289404 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:17:47.189239  289404 api_server.go:165] Checking apiserver status ...
	I0412 20:17:47.189346  289404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:17:47.198862  289404 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:17:47.389200  289404 api_server.go:165] Checking apiserver status ...
	I0412 20:17:47.389286  289404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:17:47.398241  289404 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:17:47.589313  289404 api_server.go:165] Checking apiserver status ...
	I0412 20:17:47.589388  289404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:17:47.598198  289404 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:17:47.789429  289404 api_server.go:165] Checking apiserver status ...
	I0412 20:17:47.789512  289404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:17:47.798393  289404 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:17:47.988615  289404 api_server.go:165] Checking apiserver status ...
	I0412 20:17:47.988696  289404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:17:47.997702  289404 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:17:48.188966  289404 api_server.go:165] Checking apiserver status ...
	I0412 20:17:48.189070  289404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:17:48.198201  289404 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:17:48.389562  289404 api_server.go:165] Checking apiserver status ...
	I0412 20:17:48.389638  289404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:17:48.398668  289404 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:17:48.588987  289404 api_server.go:165] Checking apiserver status ...
	I0412 20:17:48.589084  289404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:17:48.598056  289404 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:17:48.789219  289404 api_server.go:165] Checking apiserver status ...
	I0412 20:17:48.789320  289404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:17:48.798195  289404 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:17:48.989476  289404 api_server.go:165] Checking apiserver status ...
	I0412 20:17:48.989556  289404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:17:48.998331  289404 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:17:49.188797  289404 api_server.go:165] Checking apiserver status ...
	I0412 20:17:49.188869  289404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:17:49.197864  289404 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:17:49.389165  289404 api_server.go:165] Checking apiserver status ...
	I0412 20:17:49.389236  289404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:17:49.398385  289404 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:17:49.398411  289404 api_server.go:165] Checking apiserver status ...
	I0412 20:17:49.398456  289404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:17:49.408292  289404 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:17:49.408328  289404 kubeadm.go:576] needs reconfigure: apiserver error: timed out waiting for the condition
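
Note: the block above is a fixed-interval poll: roughly every 200ms, pgrep is retried until a kube-apiserver process appears or the caller gives up and declares "needs reconfigure". A minimal Go sketch of such a poll (the interval and timeout are assumptions read off the log timestamps, not minikube's actual constants):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForAPIServerPID retries the same pgrep the log runs until it
    // matches or the deadline passes.
    func waitForAPIServerPID(timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		out, err := exec.Command("sudo", "pgrep", "-xnf",
    			"kube-apiserver.*minikube.*").Output()
    		if err == nil {
    			return string(out), nil // pgrep exits 0 only when it matched
    		}
    		time.Sleep(200 * time.Millisecond)
    	}
    	return "", fmt.Errorf("apiserver did not appear within %s", timeout)
    }

    func main() {
    	pid, err := waitForAPIServerPID(3 * time.Second)
    	fmt.Println(pid, err)
    }
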
	I0412 20:17:49.408337  289404 kubeadm.go:1067] stopping kube-system containers ...
	I0412 20:17:49.408350  289404 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0412 20:17:49.408412  289404 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0412 20:17:49.437804  289404 cri.go:87] found id: "1bd2c2fccd8c547472f81fd84ffcc85248838b6f6bded8d4ba9f1c12dfb234c1"
	I0412 20:17:49.437833  289404 cri.go:87] found id: "f03411fc533041f9ddcf991f18a51b6055896e203a19557ce49131bc9e7796b4"
	I0412 20:17:49.437841  289404 cri.go:87] found id: "d1642a69585f2b5d8f43901e8a491cead56c56ef33038261d4145d7959922b9b"
	I0412 20:17:49.437847  289404 cri.go:87] found id: "6cc69a6c92a9c7e418d30d94f1777cbd24a28b39c530a70bc05aa2bb9749c133"
	I0412 20:17:49.437853  289404 cri.go:87] found id: "e47ba7bc7187c135dde6e6c116fd570d9338c6fa80edee55405758c75532e6db"
	I0412 20:17:49.437859  289404 cri.go:87] found id: "f29f2d4e263bc07cd05cd9c61510d49796a96af91aaf3c20135c8e50227408a5"
	I0412 20:17:49.437864  289404 cri.go:87] found id: "e3d3ef830b73a6caad316df060603879e4acd4e12edca47bc38cbc8b4e8f67a1"
	I0412 20:17:49.437870  289404 cri.go:87] found id: ""
	I0412 20:17:49.437875  289404 cri.go:232] Stopping containers: [1bd2c2fccd8c547472f81fd84ffcc85248838b6f6bded8d4ba9f1c12dfb234c1 f03411fc533041f9ddcf991f18a51b6055896e203a19557ce49131bc9e7796b4 d1642a69585f2b5d8f43901e8a491cead56c56ef33038261d4145d7959922b9b 6cc69a6c92a9c7e418d30d94f1777cbd24a28b39c530a70bc05aa2bb9749c133 e47ba7bc7187c135dde6e6c116fd570d9338c6fa80edee55405758c75532e6db f29f2d4e263bc07cd05cd9c61510d49796a96af91aaf3c20135c8e50227408a5 e3d3ef830b73a6caad316df060603879e4acd4e12edca47bc38cbc8b4e8f67a1]
	I0412 20:17:49.437925  289404 ssh_runner.go:195] Run: which crictl
	I0412 20:17:49.441008  289404 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop 1bd2c2fccd8c547472f81fd84ffcc85248838b6f6bded8d4ba9f1c12dfb234c1 f03411fc533041f9ddcf991f18a51b6055896e203a19557ce49131bc9e7796b4 d1642a69585f2b5d8f43901e8a491cead56c56ef33038261d4145d7959922b9b 6cc69a6c92a9c7e418d30d94f1777cbd24a28b39c530a70bc05aa2bb9749c133 e47ba7bc7187c135dde6e6c116fd570d9338c6fa80edee55405758c75532e6db f29f2d4e263bc07cd05cd9c61510d49796a96af91aaf3c20135c8e50227408a5 e3d3ef830b73a6caad316df060603879e4acd4e12edca47bc38cbc8b4e8f67a1
	I0412 20:17:49.468746  289404 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0412 20:17:49.479225  289404 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0412 20:17:49.486664  289404 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5747 Apr 12 20:04 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5783 Apr 12 20:04 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5935 Apr 12 20:04 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5735 Apr 12 20:04 /etc/kubernetes/scheduler.conf
	
	I0412 20:17:49.486737  289404 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0412 20:17:49.493537  289404 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0412 20:17:49.500633  289404 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0412 20:17:49.507803  289404 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0412 20:17:49.515027  289404 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0412 20:17:49.522184  289404 kubeadm.go:678] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0412 20:17:49.522211  289404 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0412 20:17:49.574062  289404 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0412 20:17:50.154731  289404 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0412 20:17:50.308499  289404 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0412 20:17:50.384584  289404 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
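
Note: rather than a full `kubeadm init`, the restart path replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the existing /var/tmp/minikube/kubeadm.yaml. A sketch of that sequence, using a plain local exec as a stand-in for minikube's SSH runner:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	// Phase names are exactly those run in the log, in the same order.
    	phases := [][]string{
    		{"certs", "all"},
    		{"kubeconfig", "all"},
    		{"kubelet-start"},
    		{"control-plane", "all"},
    		{"etcd", "local"},
    	}
    	for _, p := range phases {
    		args := append([]string{"init", "phase"}, p...)
    		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
    		cmd := exec.Command("kubeadm", args...)
    		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    		if err := cmd.Run(); err != nil {
    			fmt.Fprintf(os.Stderr, "phase %v failed: %v\n", p, err)
    			os.Exit(1)
    		}
    	}
    }
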
	I0412 20:17:50.509940  289404 api_server.go:51] waiting for apiserver process to appear ...
	I0412 20:17:50.510014  289404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:17:51.020417  289404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:17:51.521045  289404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:17:51.588779  289404 api_server.go:71] duration metric: took 1.078840712s to wait for apiserver process to appear ...
	I0412 20:17:51.588815  289404 api_server.go:87] waiting for apiserver healthz status ...
	I0412 20:17:51.588829  289404 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0412 20:17:51.589174  289404 api_server.go:256] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": dial tcp 192.168.67.2:8443: connect: connection refused
	I0412 20:17:52.089936  289404 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0412 20:17:55.386346  289404 api_server.go:266] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0412 20:17:55.386393  289404 api_server.go:102] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0412 20:17:55.589672  289404 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0412 20:17:55.679945  289404 api_server.go:266] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0412 20:17:55.680057  289404 api_server.go:102] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0412 20:17:56.089538  289404 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0412 20:17:56.094768  289404 api_server.go:266] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0412 20:17:56.094805  289404 api_server.go:102] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0412 20:17:56.589444  289404 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0412 20:17:56.594755  289404 api_server.go:266] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0412 20:17:56.601922  289404 api_server.go:140] control plane version: v1.16.0
	I0412 20:17:56.601948  289404 api_server.go:130] duration metric: took 5.013125628s to wait for apiserver health ...
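
Note: the healthz probe above keeps polling until the 500s (failing poststarthook checks) give way to a plain 200 "ok". A self-contained Go sketch of such a loop; TLS verification is skipped because, as in the log, the probe addresses the apiserver by IP rather than by a name on its certificate:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout:   2 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	for {
    		resp, err := client.Get("https://192.168.67.2:8443/healthz")
    		if err != nil {
    			// "connection refused" while the apiserver is still starting
    			time.Sleep(500 * time.Millisecond)
    			continue
    		}
    		body, _ := io.ReadAll(resp.Body)
    		resp.Body.Close()
    		if resp.StatusCode == http.StatusOK {
    			fmt.Println("healthy:", string(body))
    			return
    		}
    		// 500 responses carry the [+]/[-] checklist seen in the log.
    		fmt.Printf("returned %d:\n%s\n", resp.StatusCode, body)
    		time.Sleep(500 * time.Millisecond)
    	}
    }
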
	I0412 20:17:56.601958  289404 cni.go:93] Creating CNI manager for ""
	I0412 20:17:56.601965  289404 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0412 20:17:56.604004  289404 out.go:176] * Configuring CNI (Container Networking Interface) ...
	I0412 20:17:56.604109  289404 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0412 20:17:56.608013  289404 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.16.0/kubectl ...
	I0412 20:17:56.608039  289404 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0412 20:17:56.621855  289404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
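
Note: the CNI step above generates a manifest in memory ("scp memory --> /var/tmp/minikube/cni.yaml") and applies it with the cluster's pinned kubectl and kubeconfig. A sketch of the apply step; for brevity it pipes the manifest over stdin instead of writing the file first:

    package main

    import (
    	"bytes"
    	"os"
    	"os/exec"
    )

    func main() {
    	manifest := []byte("# kindnet DaemonSet manifest would go here\n")
    	cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.16.0/kubectl",
    		"apply", "--kubeconfig=/var/lib/minikube/kubeconfig", "-f", "-")
    	cmd.Stdin = bytes.NewReader(manifest)
    	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    	if err := cmd.Run(); err != nil {
    		os.Exit(1)
    	}
    }
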
	I0412 20:17:56.828475  289404 system_pods.go:43] waiting for kube-system pods to appear ...
	I0412 20:17:56.835721  289404 system_pods.go:59] 8 kube-system pods found
	I0412 20:17:56.835755  289404 system_pods.go:61] "coredns-5644d7b6d9-z6lnj" [dac5b00a-e450-4c85-b1dd-54344be79d5a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
	I0412 20:17:56.835762  289404 system_pods.go:61] "etcd-old-k8s-version-20220412200421-42006" [8305edc2-21b5-4258-ad07-8687f7c7f76f] Running
	I0412 20:17:56.835766  289404 system_pods.go:61] "kindnet-xxqjk" [306e6dc0-594c-4013-acc5-0fcbdf38806f] Running
	I0412 20:17:56.835772  289404 system_pods.go:61] "kube-apiserver-old-k8s-version-20220412200421-42006" [bf9e128c-6913-44d5-b0a7-1954fbcbf9bc] Running
	I0412 20:17:56.835776  289404 system_pods.go:61] "kube-controller-manager-old-k8s-version-20220412200421-42006" [7fac424e-5a0c-410f-8d27-6519915d6d2f] Running
	I0412 20:17:56.835780  289404 system_pods.go:61] "kube-proxy-nt4pk" [e0d683c7-40fd-43e1-ac82-a740e53a8513] Running
	I0412 20:17:56.835784  289404 system_pods.go:61] "kube-scheduler-old-k8s-version-20220412200421-42006" [8e70e26b-0e21-40ae-9d51-d1f712a8800c] Running
	I0412 20:17:56.835790  289404 system_pods.go:61] "storage-provisioner" [fc4dc4cd-6bf9-4b27-953d-a654ba5e298a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
	I0412 20:17:56.835795  289404 system_pods.go:74] duration metric: took 7.294557ms to wait for pod list to return data ...
	I0412 20:17:56.835802  289404 node_conditions.go:102] verifying NodePressure condition ...
	I0412 20:17:56.838835  289404 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0412 20:17:56.838868  289404 node_conditions.go:123] node cpu capacity is 8
	I0412 20:17:56.838886  289404 node_conditions.go:105] duration metric: took 3.076017ms to run NodePressure ...
	I0412 20:17:56.838911  289404 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0412 20:17:57.010809  289404 kubeadm.go:737] waiting for restarted kubelet to initialise ...
	I0412 20:17:57.014381  289404 retry.go:31] will retry after 360.127272ms: kubelet not initialised
	I0412 20:17:57.378770  289404 retry.go:31] will retry after 436.71002ms: kubelet not initialised
	I0412 20:17:57.820671  289404 retry.go:31] will retry after 527.46423ms: kubelet not initialised
	I0412 20:17:58.352826  289404 retry.go:31] will retry after 780.162888ms: kubelet not initialised
	I0412 20:17:59.137522  289404 retry.go:31] will retry after 1.502072952s: kubelet not initialised
	I0412 20:18:00.644272  289404 retry.go:31] will retry after 1.073826528s: kubelet not initialised
	I0412 20:18:01.722982  289404 retry.go:31] will retry after 1.869541159s: kubelet not initialised
	I0412 20:18:03.598023  289404 retry.go:31] will retry after 2.549945972s: kubelet not initialised
	I0412 20:18:06.152243  289404 retry.go:31] will retry after 5.131623747s: kubelet not initialised
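
Note: the retry cadence above (360ms, 436ms, 527ms, ..., 5.13s) is consistent with exponential backoff plus random jitter. A Go sketch of that pattern; the growth factor and jitter range are assumptions for illustration, not retry.go's actual constants:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retry runs do() until it succeeds or attempts are exhausted, waiting
    // a jittered, roughly geometrically growing interval between tries.
    func retry(do func() error, initial time.Duration, attempts int) error {
    	wait := initial
    	for i := 0; i < attempts; i++ {
    		if err := do(); err == nil {
    			return nil
    		} else {
    			fmt.Printf("will retry after %s: %v\n", wait, err)
    		}
    		time.Sleep(wait)
    		// grow ~1.5x with +/-25% jitter so retries don't synchronize
    		jitter := 0.75 + rand.Float64()*0.5
    		wait = time.Duration(float64(wait) * 1.5 * jitter)
    	}
    	return errors.New("all attempts failed")
    }

    func main() {
    	_ = retry(func() error { return errors.New("kubelet not initialised") },
    		360*time.Millisecond, 9)
    }
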
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	45fabe7cb7395       6de166512aa22       3 minutes ago       Exited              kindnet-cni               3                   9316c5fd3c63b
	99c30d34ba676       3c53fa8541f95       12 minutes ago      Running             kube-proxy                0                   3cb029bb303fd
	1549b6cbd198c       b0c9e5e4dbb14       12 minutes ago      Running             kube-controller-manager   0                   9d0f79bb073ce
	3ecbbe2de190c       3fc1d62d65872       12 minutes ago      Running             kube-apiserver            0                   b911569574c06
	3bb4ed6826e04       25f8c7f3da61c       12 minutes ago      Running             etcd                      0                   c8ba1e6aa297c
	e67989f440e43       884d49d6d8c9f       12 minutes ago      Running             kube-scheduler            0                   cae06935f0abb
	
	* 
	* ==> containerd <==
	* -- Logs begin at Tue 2022-04-12 20:05:24 UTC, end at Tue 2022-04-12 20:18:11 UTC. --
	Apr 12 20:11:29 embed-certs-20220412200510-42006 containerd[470]: time="2022-04-12T20:11:29.228285296Z" level=warning msg="cleaning up after shim disconnected" id=a3ab3b09e47d2204acbc8f870d4b903121d2535cbfc5b44e243f42dcffea2f9c namespace=k8s.io
	Apr 12 20:11:29 embed-certs-20220412200510-42006 containerd[470]: time="2022-04-12T20:11:29.228298872Z" level=info msg="cleaning up dead shim"
	Apr 12 20:11:29 embed-certs-20220412200510-42006 containerd[470]: time="2022-04-12T20:11:29.239504981Z" level=warning msg="cleanup warnings time=\"2022-04-12T20:11:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2412\n"
	Apr 12 20:11:30 embed-certs-20220412200510-42006 containerd[470]: time="2022-04-12T20:11:30.070466327Z" level=info msg="RemoveContainer for \"9477001e7ee3b30e9f16b66bf87b6b49322c15b624a1e90575725fc4655cc0ba\""
	Apr 12 20:11:30 embed-certs-20220412200510-42006 containerd[470]: time="2022-04-12T20:11:30.077844555Z" level=info msg="RemoveContainer for \"9477001e7ee3b30e9f16b66bf87b6b49322c15b624a1e90575725fc4655cc0ba\" returns successfully"
	Apr 12 20:11:43 embed-certs-20220412200510-42006 containerd[470]: time="2022-04-12T20:11:43.408889590Z" level=info msg="CreateContainer within sandbox \"9316c5fd3c63b7b246c2411406f65a7f4118e64aad905b71ac46068b5e7e0b84\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:2,}"
	Apr 12 20:11:43 embed-certs-20220412200510-42006 containerd[470]: time="2022-04-12T20:11:43.423240601Z" level=info msg="CreateContainer within sandbox \"9316c5fd3c63b7b246c2411406f65a7f4118e64aad905b71ac46068b5e7e0b84\" for &ContainerMetadata{Name:kindnet-cni,Attempt:2,} returns container id \"3e68560b60a91dac8935cf1d5d59e9fd8e103443c002e600103c36dfdeb5eda5\""
	Apr 12 20:11:43 embed-certs-20220412200510-42006 containerd[470]: time="2022-04-12T20:11:43.423913771Z" level=info msg="StartContainer for \"3e68560b60a91dac8935cf1d5d59e9fd8e103443c002e600103c36dfdeb5eda5\""
	Apr 12 20:11:43 embed-certs-20220412200510-42006 containerd[470]: time="2022-04-12T20:11:43.684240912Z" level=info msg="StartContainer for \"3e68560b60a91dac8935cf1d5d59e9fd8e103443c002e600103c36dfdeb5eda5\" returns successfully"
	Apr 12 20:14:23 embed-certs-20220412200510-42006 containerd[470]: time="2022-04-12T20:14:23.923976505Z" level=info msg="shim disconnected" id=3e68560b60a91dac8935cf1d5d59e9fd8e103443c002e600103c36dfdeb5eda5
	Apr 12 20:14:23 embed-certs-20220412200510-42006 containerd[470]: time="2022-04-12T20:14:23.924044599Z" level=warning msg="cleaning up after shim disconnected" id=3e68560b60a91dac8935cf1d5d59e9fd8e103443c002e600103c36dfdeb5eda5 namespace=k8s.io
	Apr 12 20:14:23 embed-certs-20220412200510-42006 containerd[470]: time="2022-04-12T20:14:23.924062670Z" level=info msg="cleaning up dead shim"
	Apr 12 20:14:23 embed-certs-20220412200510-42006 containerd[470]: time="2022-04-12T20:14:23.934397075Z" level=warning msg="cleanup warnings time=\"2022-04-12T20:14:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2513\n"
	Apr 12 20:14:24 embed-certs-20220412200510-42006 containerd[470]: time="2022-04-12T20:14:24.379921854Z" level=info msg="RemoveContainer for \"a3ab3b09e47d2204acbc8f870d4b903121d2535cbfc5b44e243f42dcffea2f9c\""
	Apr 12 20:14:24 embed-certs-20220412200510-42006 containerd[470]: time="2022-04-12T20:14:24.385389315Z" level=info msg="RemoveContainer for \"a3ab3b09e47d2204acbc8f870d4b903121d2535cbfc5b44e243f42dcffea2f9c\" returns successfully"
	Apr 12 20:14:54 embed-certs-20220412200510-42006 containerd[470]: time="2022-04-12T20:14:54.408249759Z" level=info msg="CreateContainer within sandbox \"9316c5fd3c63b7b246c2411406f65a7f4118e64aad905b71ac46068b5e7e0b84\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:3,}"
	Apr 12 20:14:54 embed-certs-20220412200510-42006 containerd[470]: time="2022-04-12T20:14:54.420714124Z" level=info msg="CreateContainer within sandbox \"9316c5fd3c63b7b246c2411406f65a7f4118e64aad905b71ac46068b5e7e0b84\" for &ContainerMetadata{Name:kindnet-cni,Attempt:3,} returns container id \"45fabe7cb7395e0c30a4393ad9200abaf7881d0466d5ffdcde46faf8e637daae\""
	Apr 12 20:14:54 embed-certs-20220412200510-42006 containerd[470]: time="2022-04-12T20:14:54.421165738Z" level=info msg="StartContainer for \"45fabe7cb7395e0c30a4393ad9200abaf7881d0466d5ffdcde46faf8e637daae\""
	Apr 12 20:14:54 embed-certs-20220412200510-42006 containerd[470]: time="2022-04-12T20:14:54.584417453Z" level=info msg="StartContainer for \"45fabe7cb7395e0c30a4393ad9200abaf7881d0466d5ffdcde46faf8e637daae\" returns successfully"
	Apr 12 20:17:34 embed-certs-20220412200510-42006 containerd[470]: time="2022-04-12T20:17:34.828907260Z" level=info msg="shim disconnected" id=45fabe7cb7395e0c30a4393ad9200abaf7881d0466d5ffdcde46faf8e637daae
	Apr 12 20:17:34 embed-certs-20220412200510-42006 containerd[470]: time="2022-04-12T20:17:34.828963844Z" level=warning msg="cleaning up after shim disconnected" id=45fabe7cb7395e0c30a4393ad9200abaf7881d0466d5ffdcde46faf8e637daae namespace=k8s.io
	Apr 12 20:17:34 embed-certs-20220412200510-42006 containerd[470]: time="2022-04-12T20:17:34.828978094Z" level=info msg="cleaning up dead shim"
	Apr 12 20:17:34 embed-certs-20220412200510-42006 containerd[470]: time="2022-04-12T20:17:34.839827432Z" level=warning msg="cleanup warnings time=\"2022-04-12T20:17:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2615\n"
	Apr 12 20:17:35 embed-certs-20220412200510-42006 containerd[470]: time="2022-04-12T20:17:35.709343768Z" level=info msg="RemoveContainer for \"3e68560b60a91dac8935cf1d5d59e9fd8e103443c002e600103c36dfdeb5eda5\""
	Apr 12 20:17:35 embed-certs-20220412200510-42006 containerd[470]: time="2022-04-12T20:17:35.713973352Z" level=info msg="RemoveContainer for \"3e68560b60a91dac8935cf1d5d59e9fd8e103443c002e600103c36dfdeb5eda5\" returns successfully"
	
	* 
	* ==> describe nodes <==
	* Name:               embed-certs-20220412200510-42006
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-20220412200510-42006
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=dcd548d63d1c0dcbdc0ffc0bd37d4379117c142f
	                    minikube.k8s.io/name=embed-certs-20220412200510-42006
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_04_12T20_05_55_0700
	                    minikube.k8s.io/version=v1.25.2
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 12 Apr 2022 20:05:50 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-20220412200510-42006
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 12 Apr 2022 20:18:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 12 Apr 2022 20:16:21 +0000   Tue, 12 Apr 2022 20:05:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 12 Apr 2022 20:16:21 +0000   Tue, 12 Apr 2022 20:05:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 12 Apr 2022 20:16:21 +0000   Tue, 12 Apr 2022 20:05:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Tue, 12 Apr 2022 20:16:21 +0000   Tue, 12 Apr 2022 20:05:48 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    embed-certs-20220412200510-42006
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873828Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873828Ki
	  pods:               110
	System Info:
	  Machine ID:                 140a143b31184b58be947b52a01fff83
	  System UUID:                ce1f241f-9ecd-4653-8279-4a97e0fb4c59
	  Boot ID:                    16b2caa1-c1b9-4ccc-85b8-d4dc3f51a5e1
	  Kernel Version:             5.13.0-1023-gcp
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.5.10
	  Kubelet Version:            v1.23.5
	  Kube-Proxy Version:         v1.23.5
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                        ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-embed-certs-20220412200510-42006                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kindnet-7f7sj                                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-embed-certs-20220412200510-42006             250m (3%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-embed-certs-20220412200510-42006    200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-6nznr                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-embed-certs-20220412200510-42006             100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  Starting                 12m                kube-proxy  
	  Normal  Starting                 12m                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  12m (x5 over 12m)  kubelet     Node embed-certs-20220412200510-42006 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x4 over 12m)  kubelet     Node embed-certs-20220412200510-42006 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x4 over 12m)  kubelet     Node embed-certs-20220412200510-42006 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 12m                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  12m                kubelet     Node embed-certs-20220412200510-42006 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m                kubelet     Node embed-certs-20220412200510-42006 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m                kubelet     Node embed-certs-20220412200510-42006 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet     Updated Node Allocatable limit across pods
	
	* 
	* ==> dmesg <==
	* [  +0.000009] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +0.125166] IPv4: martian source 10.244.0.7 from 10.244.0.7, on dev vethe3e22a2f
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff d2 83 e6 b4 2e c9 08 06
	[  +0.519855] IPv4: martian source 10.244.0.8 from 10.244.0.8, on dev vethde433a44
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fe f7 53 8a eb 26 08 06
	[  +0.208112] IPv4: martian source 10.244.0.9 from 10.244.0.9, on dev veth05fda112
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 06 c9 f0 64 c1 d9 08 06
	[Apr12 20:12] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +1.026706] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +1.023926] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +2.947865] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +1.023840] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +1.019933] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +2.959880] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +1.007861] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +1.023916] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	
	* 
	* ==> etcd [3bb4ed6826e041fff709fbb31d1f2446a15f08bcc0fa07eb151243acd0226bed] <==
	* {"level":"info","ts":"2022-04-12T20:05:48.083Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-04-12T20:05:48.083Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2022-04-12T20:05:48.083Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2022-04-12T20:05:48.083Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-04-12T20:05:48.083Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-04-12T20:05:48.617Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2022-04-12T20:05:48.617Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2022-04-12T20:05:48.617Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2022-04-12T20:05:48.617Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2022-04-12T20:05:48.617Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2022-04-12T20:05:48.617Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2022-04-12T20:05:48.617Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2022-04-12T20:05:48.617Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-04-12T20:05:48.619Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2022-04-12T20:05:48.619Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-04-12T20:05:48.619Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-04-12T20:05:48.619Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:embed-certs-20220412200510-42006 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2022-04-12T20:05:48.619Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-04-12T20:05:48.619Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-04-12T20:05:48.619Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-04-12T20:05:48.619Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-04-12T20:05:48.620Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-04-12T20:05:48.620Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2022-04-12T20:15:48.637Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":560}
	{"level":"info","ts":"2022-04-12T20:15:48.638Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":560,"took":"498.319µs"}
	
	* 
	* ==> kernel <==
	*  20:18:11 up  3:00,  0 users,  load average: 1.38, 0.96, 1.40
	Linux embed-certs-20220412200510-42006 5.13.0-1023-gcp #28~20.04.1-Ubuntu SMP Wed Mar 30 03:51:07 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [3ecbbe2de190c9c1e2f575bb88b355a7eaf09932cb16fd1a6cef069051de9930] <==
	* I0412 20:05:51.079090       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0412 20:05:51.079168       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I0412 20:05:51.079317       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0412 20:05:51.079334       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0412 20:05:51.081403       1 controller.go:611] quota admission added evaluator for: namespaces
	I0412 20:05:51.951431       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0412 20:05:51.956780       1 storage_scheduling.go:93] created PriorityClass system-node-critical with value 2000001000
	I0412 20:05:51.958625       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0412 20:05:51.960721       1 storage_scheduling.go:93] created PriorityClass system-cluster-critical with value 2000000000
	I0412 20:05:51.960740       1 storage_scheduling.go:109] all system priority classes are created successfully or already exist.
	I0412 20:05:52.453396       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0412 20:05:52.492042       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0412 20:05:52.622773       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0412 20:05:52.627636       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I0412 20:05:52.628832       1 controller.go:611] quota admission added evaluator for: endpoints
	I0412 20:05:52.632992       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0412 20:05:52.692975       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0412 20:05:53.108187       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0412 20:05:54.258431       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0412 20:05:54.266902       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0412 20:05:54.281209       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0412 20:06:06.703041       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0412 20:06:06.802578       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0412 20:06:07.429868       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	* 
	* ==> kube-controller-manager [1549b6cbd198c45abd7224f0fbd5ce0d6713b1d4c5ccbad32a34ac2b6a109d2d] <==
	* I0412 20:06:05.965796       1 range_allocator.go:374] Set node embed-certs-20220412200510-42006 PodCIDR to [10.244.0.0/24]
	I0412 20:06:05.965962       1 shared_informer.go:247] Caches are synced for PVC protection 
	I0412 20:06:05.988586       1 shared_informer.go:247] Caches are synced for taint 
	I0412 20:06:05.988690       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0412 20:06:05.988706       1 node_lifecycle_controller.go:1397] Initializing eviction metric for zone: 
	W0412 20:06:05.988857       1 node_lifecycle_controller.go:1012] Missing timestamp for Node embed-certs-20220412200510-42006. Assuming now as a timestamp.
	I0412 20:06:05.988920       1 node_lifecycle_controller.go:1163] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I0412 20:06:05.988871       1 event.go:294] "Event occurred" object="embed-certs-20220412200510-42006" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node embed-certs-20220412200510-42006 event: Registered Node embed-certs-20220412200510-42006 in Controller"
	I0412 20:06:06.049681       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	I0412 20:06:06.072407       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
	I0412 20:06:06.100997       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0412 20:06:06.117589       1 shared_informer.go:247] Caches are synced for disruption 
	I0412 20:06:06.117622       1 disruption.go:371] Sending events to api server.
	I0412 20:06:06.155080       1 shared_informer.go:247] Caches are synced for resource quota 
	I0412 20:06:06.158368       1 shared_informer.go:247] Caches are synced for resource quota 
	I0412 20:06:06.555369       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0412 20:06:06.555404       1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0412 20:06:06.586454       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0412 20:06:06.705486       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-64897985d to 2"
	I0412 20:06:06.809151       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-6nznr"
	I0412 20:06:06.809239       1 event.go:294] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-7f7sj"
	I0412 20:06:06.951974       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-64897985d to 1"
	I0412 20:06:06.955212       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-gnw47"
	I0412 20:06:06.962832       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-zvglg"
	I0412 20:06:06.997626       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-64897985d-gnw47"
	
	* 
	* ==> kube-proxy [99c30d34ba6769dbe90b18eefcf0db92072e5d977b32371ee959bba91b958dc9] <==
	* I0412 20:06:07.392554       1 node.go:163] Successfully retrieved node IP: 192.168.58.2
	I0412 20:06:07.392628       1 server_others.go:138] "Detected node IP" address="192.168.58.2"
	I0412 20:06:07.392660       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0412 20:06:07.419205       1 server_others.go:206] "Using iptables Proxier"
	I0412 20:06:07.419245       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0412 20:06:07.419257       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0412 20:06:07.419297       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0412 20:06:07.419807       1 server.go:656] "Version info" version="v1.23.5"
	I0412 20:06:07.422063       1 config.go:226] "Starting endpoint slice config controller"
	I0412 20:06:07.422089       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0412 20:06:07.422928       1 config.go:317] "Starting service config controller"
	I0412 20:06:07.422945       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0412 20:06:07.524186       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0412 20:06:07.524314       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [e67989f440e4332c6ff00c54e8fa657032c034f05a0edc75576cb16ffd4794b0] <==
	* E0412 20:05:51.099919       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0412 20:05:51.099933       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0412 20:05:51.099991       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0412 20:05:51.099995       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0412 20:05:51.100017       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0412 20:05:51.100045       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0412 20:05:51.928224       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0412 20:05:51.928267       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0412 20:05:51.928229       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0412 20:05:51.928294       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0412 20:05:51.981180       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0412 20:05:51.981262       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0412 20:05:51.982338       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0412 20:05:51.982383       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0412 20:05:52.070012       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0412 20:05:52.070085       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0412 20:05:52.082539       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0412 20:05:52.082581       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0412 20:05:52.109222       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0412 20:05:52.109254       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0412 20:05:52.121424       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0412 20:05:52.121458       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0412 20:05:52.211687       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0412 20:05:52.211733       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0412 20:05:54.188758       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2022-04-12 20:05:24 UTC, end at Tue 2022-04-12 20:18:11 UTC. --
	Apr 12 20:16:44 embed-certs-20220412200510-42006 kubelet[1305]: E0412 20:16:44.773837    1305 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:16:49 embed-certs-20220412200510-42006 kubelet[1305]: E0412 20:16:49.774599    1305 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:16:54 embed-certs-20220412200510-42006 kubelet[1305]: E0412 20:16:54.776115    1305 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:16:59 embed-certs-20220412200510-42006 kubelet[1305]: E0412 20:16:59.777612    1305 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:17:04 embed-certs-20220412200510-42006 kubelet[1305]: E0412 20:17:04.779194    1305 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:17:09 embed-certs-20220412200510-42006 kubelet[1305]: E0412 20:17:09.780176    1305 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:17:14 embed-certs-20220412200510-42006 kubelet[1305]: E0412 20:17:14.780858    1305 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:17:19 embed-certs-20220412200510-42006 kubelet[1305]: E0412 20:17:19.782516    1305 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:17:24 embed-certs-20220412200510-42006 kubelet[1305]: E0412 20:17:24.783193    1305 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:17:29 embed-certs-20220412200510-42006 kubelet[1305]: E0412 20:17:29.784293    1305 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:17:34 embed-certs-20220412200510-42006 kubelet[1305]: E0412 20:17:34.785573    1305 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:17:35 embed-certs-20220412200510-42006 kubelet[1305]: I0412 20:17:35.708155    1305 scope.go:110] "RemoveContainer" containerID="3e68560b60a91dac8935cf1d5d59e9fd8e103443c002e600103c36dfdeb5eda5"
	Apr 12 20:17:35 embed-certs-20220412200510-42006 kubelet[1305]: I0412 20:17:35.708537    1305 scope.go:110] "RemoveContainer" containerID="45fabe7cb7395e0c30a4393ad9200abaf7881d0466d5ffdcde46faf8e637daae"
	Apr 12 20:17:35 embed-certs-20220412200510-42006 kubelet[1305]: E0412 20:17:35.708895    1305 pod_workers.go:949] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-7f7sj_kube-system(059bb69b-b8de-4f71-85b1-8d7391491598)\"" pod="kube-system/kindnet-7f7sj" podUID=059bb69b-b8de-4f71-85b1-8d7391491598
	Apr 12 20:17:39 embed-certs-20220412200510-42006 kubelet[1305]: E0412 20:17:39.786303    1305 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:17:44 embed-certs-20220412200510-42006 kubelet[1305]: E0412 20:17:44.786993    1305 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:17:49 embed-certs-20220412200510-42006 kubelet[1305]: I0412 20:17:49.405605    1305 scope.go:110] "RemoveContainer" containerID="45fabe7cb7395e0c30a4393ad9200abaf7881d0466d5ffdcde46faf8e637daae"
	Apr 12 20:17:49 embed-certs-20220412200510-42006 kubelet[1305]: E0412 20:17:49.406218    1305 pod_workers.go:949] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-7f7sj_kube-system(059bb69b-b8de-4f71-85b1-8d7391491598)\"" pod="kube-system/kindnet-7f7sj" podUID=059bb69b-b8de-4f71-85b1-8d7391491598
	Apr 12 20:17:49 embed-certs-20220412200510-42006 kubelet[1305]: E0412 20:17:49.788359    1305 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:17:54 embed-certs-20220412200510-42006 kubelet[1305]: E0412 20:17:54.789689    1305 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:17:59 embed-certs-20220412200510-42006 kubelet[1305]: E0412 20:17:59.791113    1305 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:18:00 embed-certs-20220412200510-42006 kubelet[1305]: I0412 20:18:00.405123    1305 scope.go:110] "RemoveContainer" containerID="45fabe7cb7395e0c30a4393ad9200abaf7881d0466d5ffdcde46faf8e637daae"
	Apr 12 20:18:00 embed-certs-20220412200510-42006 kubelet[1305]: E0412 20:18:00.405413    1305 pod_workers.go:949] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-7f7sj_kube-system(059bb69b-b8de-4f71-85b1-8d7391491598)\"" pod="kube-system/kindnet-7f7sj" podUID=059bb69b-b8de-4f71-85b1-8d7391491598
	Apr 12 20:18:04 embed-certs-20220412200510-42006 kubelet[1305]: E0412 20:18:04.792676    1305 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:18:09 embed-certs-20220412200510-42006 kubelet[1305]: E0412 20:18:09.794126    1305 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	

                                                
                                                
-- /stdout --
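The kubelet tail above pins down the DeployApp failure: the kindnet-cni container is in CrashLoopBackOff, so the CNI plugin never initializes, the node keeps reporting NetworkReady=false, and the busybox pod stays unschedulable. As a hedged follow-up sketch (plain kubectl against the same profile, not commands the harness runs; the pod name is taken from the log above):

	kubectl --context embed-certs-20220412200510-42006 -n kube-system logs -p kindnet-7f7sj -c kindnet-cni
	kubectl --context embed-certs-20220412200510-42006 get nodes -o wide

The -p flag prints the previous (crashed) container instance's output, which is where the actual kindnet failure reason would surface.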
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20220412200510-42006 -n embed-certs-20220412200510-42006
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-20220412200510-42006 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: busybox coredns-64897985d-zvglg storage-provisioner
helpers_test.go:272: ======> post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context embed-certs-20220412200510-42006 describe pod busybox coredns-64897985d-zvglg storage-provisioner
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context embed-certs-20220412200510-42006 describe pod busybox coredns-64897985d-zvglg storage-provisioner: exit status 1 (61.001511ms)

                                                
                                                
-- stdout --
	Name:         busybox
	Namespace:    default
	Priority:     0
	Node:         <none>
	Labels:       integration-test=busybox
	Annotations:  <none>
	Status:       Pending
	IP:           
	IPs:          <none>
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mdqq8 (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-mdqq8:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                 From               Message
	  ----     ------            ----                ----               -------
	  Warning  FailedScheduling  47s (x8 over 8m3s)  default-scheduler  0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "coredns-64897985d-zvglg" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context embed-certs-20220412200510-42006 describe pod busybox coredns-64897985d-zvglg storage-provisioner: exit status 1
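Note that the two NotFound errors in stderr are a namespacing artifact rather than additional missing pods: the describe command is issued without -n, so kubectl looks only in the default namespace, while coredns-64897985d-zvglg and storage-provisioner live in kube-system. A namespace-qualified sketch of the same query:

	kubectl --context embed-certs-20220412200510-42006 -n kube-system describe pod coredns-64897985d-zvglg storage-provisioner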
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220412200510-42006
helpers_test.go:235: (dbg) docker inspect embed-certs-20220412200510-42006:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "340eb3625ebd62fb359cd33fcc6dcfaf998d12a5a7abf9d2b97ffe2759fd47b7",
	        "Created": "2022-04-12T20:05:23.305199436Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 257029,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-04-12T20:05:24.124628513Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:44d43b69f3d5ba7f801dca891b535f23f9839671e82277938ec7dc42a22c50d6",
	        "ResolvConfPath": "/var/lib/docker/containers/340eb3625ebd62fb359cd33fcc6dcfaf998d12a5a7abf9d2b97ffe2759fd47b7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/340eb3625ebd62fb359cd33fcc6dcfaf998d12a5a7abf9d2b97ffe2759fd47b7/hostname",
	        "HostsPath": "/var/lib/docker/containers/340eb3625ebd62fb359cd33fcc6dcfaf998d12a5a7abf9d2b97ffe2759fd47b7/hosts",
	        "LogPath": "/var/lib/docker/containers/340eb3625ebd62fb359cd33fcc6dcfaf998d12a5a7abf9d2b97ffe2759fd47b7/340eb3625ebd62fb359cd33fcc6dcfaf998d12a5a7abf9d2b97ffe2759fd47b7-json.log",
	        "Name": "/embed-certs-20220412200510-42006",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-20220412200510-42006:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-20220412200510-42006",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/dadeb2eddd4e44191a9cbc0ea441c3b044c125e01ecdef76eaf6f1e678a0465d-init/diff:/var/lib/docker/overlay2/a46d95d024de4bf9705eb193a92586bdab1878cd991975232b71b00099a9dcbd/diff:/var/lib/docker/overlay2/ea82ee4a684697cc3575193cd81b57372b927c9bf8e744fce634f9abd0ce56f9/diff:/var/lib/docker/overlay2/78746ad8dd0d6497f442bd186c99cfd280a7ed0ff07c9d33d217c0f00c8c4565/diff:/var/lib/docker/overlay2/a402f380eceb56655ea5f1e6ca4a61a01ae014a5df04f1a7d02d8f57ff3e6c84/diff:/var/lib/docker/overlay2/b27a231791a4d14a662f9e6e34fdd213411e56cc17149199657aa480018b3c72/diff:/var/lib/docker/overlay2/0a44e7fc2c8d5589d496b9d0585d39e8e142f48342ff9669a35c370bd0298e42/diff:/var/lib/docker/overlay2/6ca98e52ca7d4cc60d14bd2db9969dd3356e0e0ce3acd5bfb5734e6e59f52c7e/diff:/var/lib/docker/overlay2/9957a7c00c30c9d801326093ddf20994a7ee1daaa54bc4dac5c2dd6d8711bd7e/diff:/var/lib/docker/overlay2/f7a1aafecf6ee716c484b5eecbbf236a53607c253fe283c289707fad85495a88/diff:/var/lib/docker/overlay2/fe8cd1
26522650fedfc827751e0b74da9a882ff48de51bc9dee6428ee3bc1122/diff:/var/lib/docker/overlay2/5b4cc7e4a78288063ad39231ca158608aa28e9dec6015d4e186e4c4d6888017f/diff:/var/lib/docker/overlay2/2a754ceb6abee0f92c99667fae50c7899233e94595630e9caffbf73cda1ff741/diff:/var/lib/docker/overlay2/9e69139d9b2bc63ab678378e004018ece394ec37e8289ba5eb30901dda160da5/diff:/var/lib/docker/overlay2/3db8e6413b3a1f309b81d2e1a79c3d239c4e4568b31a6f4bf92511f477f3a61d/diff:/var/lib/docker/overlay2/5ab54e45d09e2d6da4f4228ebae3075b5974e1d847526c1011fc7368392ef0d2/diff:/var/lib/docker/overlay2/6daf6a3cf916347bbbb70ace4aab29dd0f272dc9e39d6b0bf14940470857f1d5/diff:/var/lib/docker/overlay2/b85d29df9ed74e769c82a956eb46ca4eaf51018e94270fee2f58a6f2d82c354c/diff:/var/lib/docker/overlay2/0804b9c30e0dcc68e15139106e47bca1969b010d520652c87ff1476f5da9b799/diff:/var/lib/docker/overlay2/2ef50ba91c77826aae2efca8daf7194c2d56fd8e745476a35413585cdab580a6/diff:/var/lib/docker/overlay2/6f5a272367c30d47254dedc8a42e6b2791c406c3b74fd6a8242d568e4ec362e3/diff:/var/lib/d
ocker/overlay2/e978bd5ca7463862ca1b51d0bf19f95d916464dc866f09f1ab4a5ae4c082c3a9/diff:/var/lib/docker/overlay2/0d60a5805e276ca3bff4824250eab1d2960e9d10d28282e07652204c07dc107f/diff:/var/lib/docker/overlay2/d00efa0bc999057fcf3efdeed81022cc8b9b9871919f11d7d9199a3d22fda41b/diff:/var/lib/docker/overlay2/44d3db5bf7925c4cc8ee60008ff23d799e12ea6586850d797b930fa796788861/diff:/var/lib/docker/overlay2/4af15c525b7ce96b7fd4117c156f53cf9099702641c2907909c12b7019563d44/diff:/var/lib/docker/overlay2/ae9ca4b8da4afb1303158a42ec2ac83dc057c0eaefcd69b7eeaa094ae24a39e7/diff:/var/lib/docker/overlay2/afb8ebd776ddcba17d1056f2350cd0b303c6664964644896a92e9c07252b5d95/diff:/var/lib/docker/overlay2/41b6235378ad54ccaec907f16811e7cd66bd777db63151293f4d8247a33af8f1/diff:/var/lib/docker/overlay2/e079465076581cb577a9d5c7d676cecb6495ddd73d9fc330e734203dd7e48607/diff:/var/lib/docker/overlay2/2d3a7c3e62a99d54d94c2562e13b904453442bda8208afe73cdbe1afdbdd0684/diff:/var/lib/docker/overlay2/b9e03b9cbc1c5a9bbdbb0c99ca5d7539c2fa81a37872c40e07377b52f19
50f4b/diff:/var/lib/docker/overlay2/fd0b72378869edec809e7ead1e4448ae67c73245e0e98d751c51253c80f12d56/diff:/var/lib/docker/overlay2/a34f5625ad35eb2eb1058204a5c23590d70d9aae62a3a0cf05f87501c388ccde/diff:/var/lib/docker/overlay2/6221ad5f4d7b133c35d96ab112cf2eb437196475a72ea0ec8952c058c6644381/diff:/var/lib/docker/overlay2/b33a322162ab62a47e5e731b35da4a989d8a79fcb67e1925b109eace6772370c/diff:/var/lib/docker/overlay2/b52fc81aca49f276f1c709fa139521063628f4042b9da5969a3487a57ee3226b/diff:/var/lib/docker/overlay2/5b4d11a181cad1ea657c7ea99d422b51c942ece21b8d24442b4e8806644e0e1c/diff:/var/lib/docker/overlay2/1620ce1d42f02f38d07f3ff0970e3df6940a3be20f3c7cd835f4f40f5cc2d010/diff:/var/lib/docker/overlay2/43f18c528700dc241024bb24f43a0d5192ecc9575f4b053582410f6265326434/diff:/var/lib/docker/overlay2/e59874999e485483e50da428a499e40c91890c33515857454d7a64bc04ca0c43/diff:/var/lib/docker/overlay2/a120ff1bbaa325cd87d2682d6751d3bf287b66d4bbe31bd1f9f6283d724491ac/diff:/var/lib/docker/overlay2/a6a6f3646fabc023283ff6349b9627be8332c4
bb740688f8fda12c98bd76b725/diff:/var/lib/docker/overlay2/3c2b110c4b3a8689b2792b2b73f99f06bd9858b494c2164e812208579b0223f2/diff:/var/lib/docker/overlay2/98e3881e2e4128283f8d66fafc082bc795e22eab77f135635d3249367b92ba5c/diff:/var/lib/docker/overlay2/ce937670cf64eff618c699bfd15e46c6d70c0184fef594182e5ec6df83b265bc/diff",
	                "MergedDir": "/var/lib/docker/overlay2/dadeb2eddd4e44191a9cbc0ea441c3b044c125e01ecdef76eaf6f1e678a0465d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/dadeb2eddd4e44191a9cbc0ea441c3b044c125e01ecdef76eaf6f1e678a0465d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/dadeb2eddd4e44191a9cbc0ea441c3b044c125e01ecdef76eaf6f1e678a0465d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-20220412200510-42006",
	                "Source": "/var/lib/docker/volumes/embed-certs-20220412200510-42006/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-20220412200510-42006",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-20220412200510-42006",
	                "name.minikube.sigs.k8s.io": "embed-certs-20220412200510-42006",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cfc6cecb94535d9fe135b877fee8b93f35d43a7969a073acac3b2c920f4dbb93",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49402"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49401"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49398"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49400"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49399"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/cfc6cecb9453",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-20220412200510-42006": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "340eb3625ebd",
	                        "embed-certs-20220412200510-42006"
	                    ],
	                    "NetworkID": "4ace6a0fae231d855dc7c20348778126fda239556e97939a30b4df667ae930f8",
	                    "EndpointID": "c940297a63e2c35df1a11c0d38d5e5fab82464350b8665dcb6e65be5ac8cc428",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
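For orientation in the inspect dump above: the kic container publishes each cluster port to an ephemeral loopback port on the host (8443, the API server, at 127.0.0.1:49399; 22 at 127.0.0.1:49402; and so on), which is how minikube and the test harness reach the cluster. As a quick sketch, the same mapping can be read back with docker port:

	docker port embed-certs-20220412200510-42006 8443
	# 127.0.0.1:49399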
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20220412200510-42006 -n embed-certs-20220412200510-42006
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-20220412200510-42006 logs -n 25
helpers_test.go:252: TestStartStop/group/embed-certs/serial/DeployApp logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                            Args                            |                     Profile                     |  User   | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| ssh     | -p                                                         | no-preload-20220412200453-42006                 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:12:20 UTC | Tue, 12 Apr 2022 20:12:20 UTC |
	|         | no-preload-20220412200453-42006                            |                                                 |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                               |                               |
	| pause   | -p                                                         | no-preload-20220412200453-42006                 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:12:20 UTC | Tue, 12 Apr 2022 20:12:21 UTC |
	|         | no-preload-20220412200453-42006                            |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                               |                               |
	| unpause | -p                                                         | no-preload-20220412200453-42006                 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:12:22 UTC | Tue, 12 Apr 2022 20:12:23 UTC |
	|         | no-preload-20220412200453-42006                            |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                               |                               |
	| delete  | -p                                                         | no-preload-20220412200453-42006                 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:12:24 UTC | Tue, 12 Apr 2022 20:12:27 UTC |
	|         | no-preload-20220412200453-42006                            |                                                 |         |         |                               |                               |
	| delete  | -p                                                         | no-preload-20220412200453-42006                 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:12:27 UTC | Tue, 12 Apr 2022 20:12:27 UTC |
	|         | no-preload-20220412200453-42006                            |                                                 |         |         |                               |                               |
	| delete  | -p                                                         | disable-driver-mounts-20220412201227-42006      | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:12:27 UTC | Tue, 12 Apr 2022 20:12:28 UTC |
	|         | disable-driver-mounts-20220412201227-42006                 |                                                 |         |         |                               |                               |
	| -p      | bridge-20220412195202-42006                                | bridge-20220412195202-42006                     | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:12:49 UTC | Tue, 12 Apr 2022 20:12:50 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	| delete  | -p bridge-20220412195202-42006                             | bridge-20220412195202-42006                     | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:12:50 UTC | Tue, 12 Apr 2022 20:12:53 UTC |
	| start   | -p newest-cni-20220412201253-42006 --memory=2200           | newest-cni-20220412201253-42006                 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:12:53 UTC | Tue, 12 Apr 2022 20:13:47 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                 |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                 |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                 |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                 |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=containerd            |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.23.6-rc.0                          |                                                 |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | newest-cni-20220412201253-42006                 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:13:47 UTC | Tue, 12 Apr 2022 20:13:48 UTC |
	|         | newest-cni-20220412201253-42006                            |                                                 |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                               |                               |
	| stop    | -p                                                         | newest-cni-20220412201253-42006                 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:13:48 UTC | Tue, 12 Apr 2022 20:14:08 UTC |
	|         | newest-cni-20220412201253-42006                            |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | newest-cni-20220412201253-42006                 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:14:08 UTC | Tue, 12 Apr 2022 20:14:08 UTC |
	|         | newest-cni-20220412201253-42006                            |                                                 |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                               |                               |
	| start   | -p newest-cni-20220412201253-42006 --memory=2200           | newest-cni-20220412201253-42006                 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:14:08 UTC | Tue, 12 Apr 2022 20:14:42 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                 |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                 |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                 |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                 |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=containerd            |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.23.6-rc.0                          |                                                 |         |         |                               |                               |
	| ssh     | -p                                                         | newest-cni-20220412201253-42006                 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:14:43 UTC | Tue, 12 Apr 2022 20:14:43 UTC |
	|         | newest-cni-20220412201253-42006                            |                                                 |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                               |                               |
	| pause   | -p                                                         | newest-cni-20220412201253-42006                 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:14:43 UTC | Tue, 12 Apr 2022 20:14:44 UTC |
	|         | newest-cni-20220412201253-42006                            |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                               |                               |
	| unpause | -p                                                         | newest-cni-20220412201253-42006                 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:14:45 UTC | Tue, 12 Apr 2022 20:14:45 UTC |
	|         | newest-cni-20220412201253-42006                            |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                               |                               |
	| delete  | -p                                                         | newest-cni-20220412201253-42006                 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:14:46 UTC | Tue, 12 Apr 2022 20:14:49 UTC |
	|         | newest-cni-20220412201253-42006                            |                                                 |         |         |                               |                               |
	| delete  | -p                                                         | newest-cni-20220412201253-42006                 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:14:49 UTC | Tue, 12 Apr 2022 20:14:49 UTC |
	|         | newest-cni-20220412201253-42006                            |                                                 |         |         |                               |                               |
	| -p      | old-k8s-version-20220412200421-42006                       | old-k8s-version-20220412200421-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:17:18 UTC | Tue, 12 Apr 2022 20:17:19 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	| -p      | old-k8s-version-20220412200421-42006                       | old-k8s-version-20220412200421-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:17:20 UTC | Tue, 12 Apr 2022 20:17:21 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | old-k8s-version-20220412200421-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:17:22 UTC | Tue, 12 Apr 2022 20:17:22 UTC |
	|         | old-k8s-version-20220412200421-42006                       |                                                 |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                               |                               |
	| -p      | default-k8s-different-port-20220412201228-42006            | default-k8s-different-port-20220412201228-42006 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:17:23 UTC | Tue, 12 Apr 2022 20:17:24 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	| stop    | -p                                                         | old-k8s-version-20220412200421-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:17:23 UTC | Tue, 12 Apr 2022 20:17:28 UTC |
	|         | old-k8s-version-20220412200421-42006                       |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | old-k8s-version-20220412200421-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:17:29 UTC | Tue, 12 Apr 2022 20:17:29 UTC |
	|         | old-k8s-version-20220412200421-42006                       |                                                 |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                               |                               |
	| -p      | embed-certs-20220412200510-42006                           | embed-certs-20220412200510-42006                | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:18:10 UTC | Tue, 12 Apr 2022 20:18:11 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	|---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/04/12 20:17:29
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.18 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0412 20:17:29.197380  289404 out.go:297] Setting OutFile to fd 1 ...
	I0412 20:17:29.197556  289404 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0412 20:17:29.197567  289404 out.go:310] Setting ErrFile to fd 2...
	I0412 20:17:29.197574  289404 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0412 20:17:29.197697  289404 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/bin
	I0412 20:17:29.198001  289404 out.go:304] Setting JSON to false
	I0412 20:17:29.199693  289404 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":10802,"bootTime":1649783847,"procs":690,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1023-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0412 20:17:29.199774  289404 start.go:125] virtualization: kvm guest
	I0412 20:17:29.202751  289404 out.go:176] * [old-k8s-version-20220412200421-42006] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	I0412 20:17:29.204680  289404 out.go:176]   - MINIKUBE_LOCATION=13812
	I0412 20:17:29.202936  289404 notify.go:193] Checking for updates...
	I0412 20:17:29.206545  289404 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0412 20:17:29.208334  289404 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0412 20:17:29.210033  289404 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube
	I0412 20:17:29.211681  289404 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0412 20:17:29.212186  289404 config.go:178] Loaded profile config "old-k8s-version-20220412200421-42006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I0412 20:17:29.214567  289404 out.go:176] * Kubernetes 1.23.5 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.23.5
	I0412 20:17:29.214664  289404 driver.go:346] Setting default libvirt URI to qemu:///system
	I0412 20:17:29.257552  289404 docker.go:137] docker version: linux-20.10.14
	I0412 20:17:29.257664  289404 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0412 20:17:29.358882  289404 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:75 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:true NGoroutines:44 SystemTime:2022-04-12 20:17:29.289676597 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1023-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662799872 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
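
Note: the docker system info --format "{{json .}}" probe above emits one JSON object per call, which minikube decodes into the struct echoed by info.go:265. Below is a minimal, self-contained sketch of the same round trip; the struct is illustrative and carries only a few of the fields visible in the log, not minikube's actual types.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// dockerInfo mirrors a handful of the fields printed in the log line above.
type dockerInfo struct {
	NCPU          int    `json:"NCPU"`
	MemTotal      int64  `json:"MemTotal"`
	CgroupDriver  string `json:"CgroupDriver"`
	MemoryLimit   bool   `json:"MemoryLimit"`
	ServerVersion string `json:"ServerVersion"`
}

func main() {
	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
	if err != nil {
		fmt.Println("docker unavailable:", err)
		return
	}
	var info dockerInfo
	if err := json.Unmarshal(out, &info); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	fmt.Printf("docker %s: %d CPUs, %d bytes RAM, cgroup driver %s, memory limits %v\n",
		info.ServerVersion, info.NCPU, info.MemTotal, info.CgroupDriver, info.MemoryLimit)
}
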
	I0412 20:17:29.359016  289404 docker.go:254] overlay module found
	I0412 20:17:29.361664  289404 out.go:176] * Using the docker driver based on existing profile
	I0412 20:17:29.361689  289404 start.go:284] selected driver: docker
	I0412 20:17:29.361695  289404 start.go:801] validating driver "docker" against &{Name:old-k8s-version-20220412200421-42006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220412200421-42006 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0412 20:17:29.361823  289404 start.go:812] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	W0412 20:17:29.361867  289404 oci.go:120] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0412 20:17:29.361884  289404 out.go:241] ! Your cgroup does not allow setting memory.
	I0412 20:17:29.363683  289404 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0412 20:17:29.364314  289404 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0412 20:17:29.462530  289404 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:75 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:true NGoroutines:44 SystemTime:2022-04-12 20:17:29.395046244 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1023-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662799872 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	W0412 20:17:29.462681  289404 oci.go:120] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0412 20:17:29.462711  289404 out.go:241] ! Your cgroup does not allow setting memory.
	I0412 20:17:29.464919  289404 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0412 20:17:29.465031  289404 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0412 20:17:29.465059  289404 cni.go:93] Creating CNI manager for ""
	I0412 20:17:29.465068  289404 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0412 20:17:29.465090  289404 start_flags.go:306] config:
	{Name:old-k8s-version-20220412200421-42006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220412200421-42006 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0412 20:17:29.467276  289404 out.go:176] * Starting control plane node old-k8s-version-20220412200421-42006 in cluster old-k8s-version-20220412200421-42006
	I0412 20:17:29.467306  289404 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0412 20:17:29.468855  289404 out.go:176] * Pulling base image ...
	I0412 20:17:29.468883  289404 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0412 20:17:29.468914  289404 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon
	I0412 20:17:29.468919  289404 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4
	I0412 20:17:29.469037  289404 cache.go:57] Caching tarball of preloaded images
	I0412 20:17:29.469329  289404 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0412 20:17:29.469377  289404 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on containerd
	I0412 20:17:29.469540  289404 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220412200421-42006/config.json ...
	I0412 20:17:29.515418  289404 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon, skipping pull
	I0412 20:17:29.515453  289404 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 exists in daemon, skipping load
	I0412 20:17:29.515475  289404 cache.go:206] Successfully downloaded all kic artifacts
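
Note: the "Found ... in local docker daemon, skipping pull" decision above boils down to asking the daemon whether the pinned base image is already present; docker image inspect exits non-zero when it is not. A hedged sketch of that probe (exec-based and illustrative, not minikube's image.go code):

package main

import (
	"fmt"
	"os/exec"
)

// imageInDaemon reports whether the local docker daemon already has ref;
// `docker image inspect` fails (non-zero exit) when the image is absent.
func imageInDaemon(ref string) bool {
	return exec.Command("docker", "image", "inspect", ref).Run() == nil
}

func main() {
	// Shortened tag form of the pinned kicbase reference from the log.
	ref := "gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815"
	if imageInDaemon(ref) {
		fmt.Println("found in daemon, skipping pull")
	} else {
		fmt.Println("not cached, would pull")
	}
}
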
	I0412 20:17:29.515513  289404 start.go:352] acquiring machines lock for old-k8s-version-20220412200421-42006: {Name:mk51335e8aecb7357290fc27d80d48b525f2bff6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0412 20:17:29.515623  289404 start.go:356] acquired machines lock for "old-k8s-version-20220412200421-42006" in 87.128µs
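
Note: the machines lock acquired above (Delay:500ms Timeout:10m0s) serializes host mutations across concurrent minikube processes. minikube uses a named mutex internally; a simplified Linux-only stand-in using an advisory flock on an assumed lock-file path, not the actual implementation:

package main

import (
	"fmt"
	"os"
	"syscall"
	"time"
)

// acquire takes an exclusive, non-blocking flock on path, retrying with a
// fixed delay until timeout, loosely mirroring the Delay/Timeout fields
// logged above.
func acquire(path string, delay, timeout time.Duration) (*os.File, error) {
	f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR, 0o600)
	if err != nil {
		return nil, err
	}
	deadline := time.Now().Add(timeout)
	for {
		if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX|syscall.LOCK_NB); err == nil {
			return f, nil
		}
		if time.Now().After(deadline) {
			f.Close()
			return nil, fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(delay)
	}
}

func main() {
	f, err := acquire("/tmp/machines.lock", 500*time.Millisecond, 10*time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer f.Close()
	fmt.Println("lock held; safe to mutate machine state")
}
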
	I0412 20:17:29.515653  289404 start.go:94] Skipping create...Using existing machine configuration
	I0412 20:17:29.515665  289404 fix.go:55] fixHost starting: 
	I0412 20:17:29.515986  289404 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220412200421-42006 --format={{.State.Status}}
	I0412 20:17:29.551090  289404 fix.go:103] recreateIfNeeded on old-k8s-version-20220412200421-42006: state=Stopped err=<nil>
	W0412 20:17:29.551126  289404 fix.go:129] unexpected machine state, will restart: <nil>
	I0412 20:17:29.554026  289404 out.go:176] * Restarting existing docker container for "old-k8s-version-20220412200421-42006" ...
	I0412 20:17:29.554110  289404 cli_runner.go:164] Run: docker start old-k8s-version-20220412200421-42006
	I0412 20:17:29.948290  289404 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220412200421-42006 --format={{.State.Status}}
	I0412 20:17:29.983637  289404 kic.go:416] container "old-k8s-version-20220412200421-42006" state is running.
	I0412 20:17:29.984024  289404 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220412200421-42006
	I0412 20:17:30.018880  289404 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220412200421-42006/config.json ...
	I0412 20:17:30.019121  289404 machine.go:88] provisioning docker machine ...
	I0412 20:17:30.019150  289404 ubuntu.go:169] provisioning hostname "old-k8s-version-20220412200421-42006"
	I0412 20:17:30.019209  289404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220412200421-42006
	I0412 20:17:30.056483  289404 main.go:134] libmachine: Using SSH client type: native
	I0412 20:17:30.056726  289404 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7d71c0] 0x7da220 <nil>  [] 0s} 127.0.0.1 49427 <nil> <nil>}
	I0412 20:17:30.056753  289404 main.go:134] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-20220412200421-42006 && echo "old-k8s-version-20220412200421-42006" | sudo tee /etc/hostname
	I0412 20:17:30.057485  289404 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50282->127.0.0.1:49427: read: connection reset by peer
	I0412 20:17:33.190100  289404 main.go:134] libmachine: SSH cmd err, output: <nil>: old-k8s-version-20220412200421-42006
	
	I0412 20:17:33.190188  289404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220412200421-42006
	I0412 20:17:33.225478  289404 main.go:134] libmachine: Using SSH client type: native
	I0412 20:17:33.225643  289404 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7d71c0] 0x7da220 <nil>  [] 0s} 127.0.0.1 49427 <nil> <nil>}
	I0412 20:17:33.225665  289404 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-20220412200421-42006' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-20220412200421-42006/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-20220412200421-42006' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0412 20:17:33.344395  289404 main.go:134] libmachine: SSH cmd err, output: <nil>: 
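
Note: the SSH script above is an idempotent hosts-file edit, adding or rewriting the 127.0.1.1 entry only when the hostname is missing. The same guard in Go, as a sketch against an assumed file path rather than the node's live /etc/hosts:

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry appends "ip name" to path unless some line already names
// the host; this is the same check the grep in the script performs.
func ensureHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	for _, line := range strings.Split(string(data), "\n") {
		fields := strings.Fields(line)
		if len(fields) >= 2 && fields[len(fields)-1] == name {
			return nil // entry already present
		}
	}
	f, err := os.OpenFile(path, os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0o644)
	if err != nil {
		return err
	}
	defer f.Close()
	_, err = fmt.Fprintf(f, "%s %s\n", ip, name)
	return err
}

func main() {
	// /tmp/hosts is a stand-in target for the sketch.
	if err := ensureHostsEntry("/tmp/hosts", "127.0.1.1", "old-k8s-version-20220412200421-42006"); err != nil {
		fmt.Println(err)
	}
}
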
	I0412 20:17:33.344433  289404 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube}
	I0412 20:17:33.344501  289404 ubuntu.go:177] setting up certificates
	I0412 20:17:33.344513  289404 provision.go:83] configureAuth start
	I0412 20:17:33.344580  289404 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220412200421-42006
	I0412 20:17:33.379393  289404 provision.go:138] copyHostCerts
	I0412 20:17:33.379467  289404 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem, removing ...
	I0412 20:17:33.379479  289404 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem
	I0412 20:17:33.379543  289404 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem (1082 bytes)
	I0412 20:17:33.379687  289404 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem, removing ...
	I0412 20:17:33.379705  289404 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem
	I0412 20:17:33.379735  289404 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem (1123 bytes)
	I0412 20:17:33.379802  289404 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem, removing ...
	I0412 20:17:33.379810  289404 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem
	I0412 20:17:33.379832  289404 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem (1675 bytes)
	I0412 20:17:33.379899  289404 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-20220412200421-42006 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-20220412200421-42006]
	I0412 20:17:33.613592  289404 provision.go:172] copyRemoteCerts
	I0412 20:17:33.613653  289404 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0412 20:17:33.613694  289404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220412200421-42006
	I0412 20:17:33.650564  289404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49427 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/old-k8s-version-20220412200421-42006/id_rsa Username:docker}
	I0412 20:17:33.739873  289404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0412 20:17:33.758647  289404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem --> /etc/docker/server.pem (1277 bytes)
	I0412 20:17:33.776884  289404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0412 20:17:33.794757  289404 provision.go:86] duration metric: configureAuth took 450.228367ms
	I0412 20:17:33.794785  289404 ubuntu.go:193] setting minikube options for container-runtime
	I0412 20:17:33.794975  289404 config.go:178] Loaded profile config "old-k8s-version-20220412200421-42006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I0412 20:17:33.794989  289404 machine.go:91] provisioned docker machine in 3.775852896s
	I0412 20:17:33.794997  289404 start.go:306] post-start starting for "old-k8s-version-20220412200421-42006" (driver="docker")
	I0412 20:17:33.795005  289404 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0412 20:17:33.795058  289404 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0412 20:17:33.795106  289404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220412200421-42006
	I0412 20:17:33.828573  289404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49427 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/old-k8s-version-20220412200421-42006/id_rsa Username:docker}
	I0412 20:17:33.915698  289404 ssh_runner.go:195] Run: cat /etc/os-release
	I0412 20:17:33.918851  289404 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0412 20:17:33.918873  289404 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0412 20:17:33.918893  289404 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0412 20:17:33.918900  289404 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0412 20:17:33.918911  289404 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/addons for local assets ...
	I0412 20:17:33.918969  289404 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files for local assets ...
	I0412 20:17:33.919030  289404 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/420062.pem -> 420062.pem in /etc/ssl/certs
	I0412 20:17:33.919114  289404 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0412 20:17:33.926132  289404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/420062.pem --> /etc/ssl/certs/420062.pem (1708 bytes)
	I0412 20:17:33.943473  289404 start.go:309] post-start completed in 148.459431ms
	I0412 20:17:33.943559  289404 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0412 20:17:33.943611  289404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220412200421-42006
	I0412 20:17:33.979296  289404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49427 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/old-k8s-version-20220412200421-42006/id_rsa Username:docker}
	I0412 20:17:34.068745  289404 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0412 20:17:34.072931  289404 fix.go:57] fixHost completed within 4.557261996s
	I0412 20:17:34.072964  289404 start.go:81] releasing machines lock for "old-k8s-version-20220412200421-42006", held for 4.557323673s
	I0412 20:17:34.073067  289404 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220412200421-42006
	I0412 20:17:34.108785  289404 ssh_runner.go:195] Run: systemctl --version
	I0412 20:17:34.108829  289404 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0412 20:17:34.108852  289404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220412200421-42006
	I0412 20:17:34.108889  289404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220412200421-42006
	I0412 20:17:34.147630  289404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49427 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/old-k8s-version-20220412200421-42006/id_rsa Username:docker}
	I0412 20:17:34.147961  289404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49427 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/old-k8s-version-20220412200421-42006/id_rsa Username:docker}
	I0412 20:17:34.232522  289404 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0412 20:17:34.259820  289404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0412 20:17:34.270552  289404 docker.go:183] disabling docker service ...
	I0412 20:17:34.270627  289404 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0412 20:17:34.281466  289404 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0412 20:17:34.291898  289404 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0412 20:17:34.372403  289404 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0412 20:17:34.452290  289404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0412 20:17:34.462444  289404 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0412 20:17:34.475927  289404 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLm1vbml0b3IudjEuY2dyb3VwcyJdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuMSIKICAgIHN0YXRzX2NvbGxlY3RfcGVyaW9kID0gMTAKICAgIGVuYWJsZV90bHNfc3RyZWFtaW5nID0gZmFsc2UKICAgIG1heF9jb250YWluZXJfbG9nX2xpbmVfc2l6ZSA9IDE2Mzg0CiAgICByZXN0cmljdF9vb21fc2NvcmVfYWRqID0gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZF0KICAgICAgZGlzY2FyZF91bnBhY2tlZF9sYXllcnMgPSB0cnVlCiAgICAgIHNuYXBzaG90dGVyID0gIm92ZXJsYXlmcyIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQuZGVmYXVsdF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnVudHJ1c3RlZF93b3JrbG9hZF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICIiCiAgICAgICAgcnVudGltZV9lbmdpbmUgPSAiIgogICAgICAgIHJ1bnRpbWVfcm9vdCA9ICIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzLnJ1bmNdCiAgICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICBTeXN0ZW1kQ2dyb3VwID0gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY25pXQogICAgICBiaW5fZGlyID0gIi9vcHQvY25pL2JpbiIKICAgICAgY29uZl9kaXIgPSAiL2V0Yy9jbmkvbmV0Lm1rIgogICAgICBjb25mX3RlbXBsYXRlID0gIiIKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5yZWdpc3RyeV0KICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5Lm1pcnJvcnNdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5Lm1pcnJvcnMuImRvY2tlci5pbyJdCiAgICAgICAgICBlbmRwb2ludCA9IFsiaHR0cHM6Ly9yZWdpc3RyeS0xLmRvY2tlci5pbyJdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuc2VydmljZS52MS5kaWZmLXNlcnZpY2UiXQogICAgZGVmYXVsdCA9IFsid2Fsa2luZyJdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ2MudjEuc2NoZWR1bGVyIl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
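
Note: the long base64 payload in the command above is simply minikube's generated /etc/containerd/config.toml, encoded so it survives shell quoting over SSH and decoded on the far side by base64 -d. The equivalent decode step as a tiny Go filter, reading the blob on stdin:

package main

import (
	"encoding/base64"
	"io"
	"os"
)

// Reads a base64 stream on stdin and writes the decoded bytes to stdout.
// Usage: go run decode.go < blob.b64 > config.toml
func main() {
	dec := base64.NewDecoder(base64.StdEncoding, os.Stdin)
	if _, err := io.Copy(os.Stdout, dec); err != nil {
		os.Exit(1)
	}
}
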
	I0412 20:17:34.489911  289404 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0412 20:17:34.497073  289404 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0412 20:17:34.504299  289404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0412 20:17:34.584100  289404 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0412 20:17:34.657988  289404 start.go:441] Will wait 60s for socket path /run/containerd/containerd.sock
	I0412 20:17:34.658055  289404 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0412 20:17:34.661997  289404 start.go:462] Will wait 60s for crictl version
	I0412 20:17:34.662052  289404 ssh_runner.go:195] Run: sudo crictl version
	I0412 20:17:34.688749  289404 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-04-12T20:17:34Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0412 20:17:45.736377  289404 ssh_runner.go:195] Run: sudo crictl version
	I0412 20:17:45.764253  289404 start.go:471] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.5.10
	RuntimeApiVersion:  v1alpha2
	I0412 20:17:45.764317  289404 ssh_runner.go:195] Run: containerd --version
	I0412 20:17:45.788116  289404 ssh_runner.go:195] Run: containerd --version
	I0412 20:17:45.813804  289404 out.go:176] * Preparing Kubernetes v1.16.0 on containerd 1.5.10 ...
	I0412 20:17:45.813902  289404 cli_runner.go:164] Run: docker network inspect old-k8s-version-20220412200421-42006 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0412 20:17:45.850078  289404 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0412 20:17:45.853619  289404 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0412 20:17:45.866312  289404 out.go:176]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0412 20:17:45.866409  289404 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0412 20:17:45.866484  289404 ssh_runner.go:195] Run: sudo crictl images --output json
	I0412 20:17:45.891403  289404 containerd.go:607] all images are preloaded for containerd runtime.
	I0412 20:17:45.891432  289404 containerd.go:521] Images already preloaded, skipping extraction
	I0412 20:17:45.891488  289404 ssh_runner.go:195] Run: sudo crictl images --output json
	I0412 20:17:45.917465  289404 containerd.go:607] all images are preloaded for containerd runtime.
	I0412 20:17:45.917491  289404 cache_images.go:84] Images are preloaded, skipping loading
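
Note: "Images are preloaded, skipping loading" is decided by listing what the runtime already holds (sudo crictl images --output json) and checking the required tags against that list. A sketch of the listing half; the struct follows crictl's JSON output shape as an assumption, not minikube's containerd.go types:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// criImages matches the shape of `crictl images --output json`:
// a top-level "images" array whose entries carry repoTags.
type criImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		fmt.Println("crictl unavailable:", err)
		return
	}
	var list criImages
	if err := json.Unmarshal(out, &list); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	for _, img := range list.Images {
		fmt.Println(img.RepoTags)
	}
}
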
	I0412 20:17:45.917536  289404 ssh_runner.go:195] Run: sudo crictl info
	I0412 20:17:45.942935  289404 cni.go:93] Creating CNI manager for ""
	I0412 20:17:45.942975  289404 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0412 20:17:45.942995  289404 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0412 20:17:45.943016  289404 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-20220412200421-42006 NodeName:old-k8s-version-20220412200421-42006 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0412 20:17:45.943146  289404 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "old-k8s-version-20220412200421-42006"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-20220412200421-42006
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.67.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
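
Note: minikube renders the kubeadm config above by filling templates with the values from kubeadm.go:158 (advertise address, pod subnet, and so on). A toy rendering of one fragment via text/template to show the mechanism; the template text here is illustrative, not minikube's actual bsutil template:

package main

import (
	"os"
	"text/template"
)

// A fragment of the ClusterConfiguration, parameterized the way the full
// template is; the values below are the ones logged above.
const networking = `networking:
  dnsDomain: {{.DNSDomain}}
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	t := template.Must(template.New("networking").Parse(networking))
	if err := t.Execute(os.Stdout, map[string]string{
		"DNSDomain":   "cluster.local",
		"PodSubnet":   "10.244.0.0/16",
		"ServiceCIDR": "10.96.0.0/12",
	}); err != nil {
		panic(err)
	}
}
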
	
	I0412 20:17:45.943244  289404 kubeadm.go:936] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-20220412200421-42006 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220412200421-42006 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0412 20:17:45.943306  289404 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0412 20:17:45.951356  289404 binaries.go:44] Found k8s binaries, skipping transfer
	I0412 20:17:45.951429  289404 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0412 20:17:45.959142  289404 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (581 bytes)
	I0412 20:17:45.973290  289404 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0412 20:17:45.987363  289404 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
	I0412 20:17:46.000890  289404 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0412 20:17:46.003861  289404 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0412 20:17:46.013912  289404 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220412200421-42006 for IP: 192.168.67.2
	I0412 20:17:46.014036  289404 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.key
	I0412 20:17:46.014072  289404 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.key
	I0412 20:17:46.014139  289404 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220412200421-42006/client.key
	I0412 20:17:46.014193  289404 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220412200421-42006/apiserver.key.c7fa3a9e
	I0412 20:17:46.014227  289404 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220412200421-42006/proxy-client.key
	I0412 20:17:46.014315  289404 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/42006.pem (1338 bytes)
	W0412 20:17:46.014376  289404 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/42006_empty.pem, impossibly tiny 0 bytes
	I0412 20:17:46.014389  289404 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem (1679 bytes)
	I0412 20:17:46.014416  289404 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem (1082 bytes)
	I0412 20:17:46.014441  289404 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem (1123 bytes)
	I0412 20:17:46.014463  289404 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem (1675 bytes)
	I0412 20:17:46.014502  289404 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/420062.pem (1708 bytes)
	I0412 20:17:46.015054  289404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220412200421-42006/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0412 20:17:46.033250  289404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220412200421-42006/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0412 20:17:46.051612  289404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220412200421-42006/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0412 20:17:46.069438  289404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220412200421-42006/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0412 20:17:46.087429  289404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0412 20:17:46.106400  289404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0412 20:17:46.126331  289404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0412 20:17:46.144926  289404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0412 20:17:46.163659  289404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/420062.pem --> /usr/share/ca-certificates/420062.pem (1708 bytes)
	I0412 20:17:46.182405  289404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0412 20:17:46.201225  289404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/42006.pem --> /usr/share/ca-certificates/42006.pem (1338 bytes)
	I0412 20:17:46.220095  289404 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0412 20:17:46.233532  289404 ssh_runner.go:195] Run: openssl version
	I0412 20:17:46.238551  289404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/420062.pem && ln -fs /usr/share/ca-certificates/420062.pem /etc/ssl/certs/420062.pem"
	I0412 20:17:46.246882  289404 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/420062.pem
	I0412 20:17:46.250144  289404 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Apr 12 19:26 /usr/share/ca-certificates/420062.pem
	I0412 20:17:46.250198  289404 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/420062.pem
	I0412 20:17:46.255293  289404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/420062.pem /etc/ssl/certs/3ec20f2e.0"
	I0412 20:17:46.263296  289404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0412 20:17:46.271317  289404 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0412 20:17:46.274644  289404 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Apr 12 19:21 /usr/share/ca-certificates/minikubeCA.pem
	I0412 20:17:46.274711  289404 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0412 20:17:46.279819  289404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0412 20:17:46.287252  289404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/42006.pem && ln -fs /usr/share/ca-certificates/42006.pem /etc/ssl/certs/42006.pem"
	I0412 20:17:46.295001  289404 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42006.pem
	I0412 20:17:46.298255  289404 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Apr 12 19:26 /usr/share/ca-certificates/42006.pem
	I0412 20:17:46.298337  289404 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42006.pem
	I0412 20:17:46.303307  289404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/42006.pem /etc/ssl/certs/51391683.0"
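
Note: the openssl/ln sequence above installs each CA into OpenSSL's trust directory: openssl x509 -hash -noout prints the subject hash, and the certificate must be reachable as /etc/ssl/certs/<hash>.0 for verification to find it. A sketch that computes the link name by shelling out to openssl, as the remote commands in the log do:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// subjectHashLink returns the "<hash>.0" filename OpenSSL expects for the
// certificate at path, using the same `openssl x509 -hash` call the log shows.
func subjectHashLink(path string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", path).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)) + ".0", nil
}

func main() {
	link, err := subjectHashLink("/usr/share/ca-certificates/minikubeCA.pem")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("symlink name:", link) // b5213941.0 in the run above
}
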
	I0412 20:17:46.310562  289404 kubeadm.go:391] StartCluster: {Name:old-k8s-version-20220412200421-42006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220412200421-42006 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0412 20:17:46.310692  289404 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0412 20:17:46.310766  289404 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0412 20:17:46.336676  289404 cri.go:87] found id: "1bd2c2fccd8c547472f81fd84ffcc85248838b6f6bded8d4ba9f1c12dfb234c1"
	I0412 20:17:46.336702  289404 cri.go:87] found id: "f03411fc533041f9ddcf991f18a51b6055896e203a19557ce49131bc9e7796b4"
	I0412 20:17:46.336709  289404 cri.go:87] found id: "d1642a69585f2b5d8f43901e8a491cead56c56ef33038261d4145d7959922b9b"
	I0412 20:17:46.336718  289404 cri.go:87] found id: "6cc69a6c92a9c7e418d30d94f1777cbd24a28b39c530a70bc05aa2bb9749c133"
	I0412 20:17:46.336726  289404 cri.go:87] found id: "e47ba7bc7187c135dde6e6c116fd570d9338c6fa80edee55405758c75532e6db"
	I0412 20:17:46.336732  289404 cri.go:87] found id: "f29f2d4e263bc07cd05cd9c61510d49796a96af91aaf3c20135c8e50227408a5"
	I0412 20:17:46.336737  289404 cri.go:87] found id: "e3d3ef830b73a6caad316df060603879e4acd4e12edca47bc38cbc8b4e8f67a1"
	I0412 20:17:46.336743  289404 cri.go:87] found id: ""
	I0412 20:17:46.336781  289404 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0412 20:17:46.350978  289404 cri.go:114] JSON = null
	W0412 20:17:46.351029  289404 kubeadm.go:398] unpause failed: list paused: list returned 0 containers, but ps returned 7
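
Note: the "unpause failed" warning above is a consistency cross-check: the CRI reported seven kube-system containers while runc's state directory reported none (JSON = null), so minikube logs the mismatch and continues with a full cluster restart. A rough sketch of the comparison using the exact commands from the log, with error handling trimmed:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Container IDs according to the CRI.
	psOut, _ := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	ids := strings.Fields(string(psOut))

	// Containers according to runc's state directory.
	runcOut, _ := exec.Command("sudo", "runc",
		"--root", "/run/containerd/runc/k8s.io", "list", "-f", "json").Output()
	var containers []map[string]interface{}
	_ = json.Unmarshal(runcOut, &containers) // "null" unmarshals to a nil slice

	if len(containers) != len(ids) {
		fmt.Printf("list returned %d containers, but ps returned %d\n",
			len(containers), len(ids))
	}
}
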
	I0412 20:17:46.351077  289404 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0412 20:17:46.359069  289404 kubeadm.go:402] found existing configuration files, will attempt cluster restart
	I0412 20:17:46.359093  289404 kubeadm.go:601] restartCluster start
	I0412 20:17:46.359140  289404 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0412 20:17:46.366326  289404 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:17:46.367582  289404 kubeconfig.go:116] verify returned: extract IP: "old-k8s-version-20220412200421-42006" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0412 20:17:46.368444  289404 kubeconfig.go:127] "old-k8s-version-20220412200421-42006" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig - will repair!
	I0412 20:17:46.369647  289404 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig: {Name:mk47182dadcc139652898f38b199a7292a3b4031 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 20:17:46.371957  289404 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0412 20:17:46.379643  289404 api_server.go:165] Checking apiserver status ...
	I0412 20:17:46.379702  289404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:17:46.388397  289404 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:17:46.588796  289404 api_server.go:165] Checking apiserver status ...
	I0412 20:17:46.588874  289404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:17:46.598135  289404 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:17:46.789302  289404 api_server.go:165] Checking apiserver status ...
	I0412 20:17:46.789389  289404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:17:46.798209  289404 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:17:46.989529  289404 api_server.go:165] Checking apiserver status ...
	I0412 20:17:46.989625  289404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:17:46.998886  289404 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:17:47.189239  289404 api_server.go:165] Checking apiserver status ...
	I0412 20:17:47.189346  289404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:17:47.198862  289404 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:17:47.389200  289404 api_server.go:165] Checking apiserver status ...
	I0412 20:17:47.389286  289404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:17:47.398241  289404 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:17:47.589313  289404 api_server.go:165] Checking apiserver status ...
	I0412 20:17:47.589388  289404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:17:47.598198  289404 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:17:47.789429  289404 api_server.go:165] Checking apiserver status ...
	I0412 20:17:47.789512  289404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:17:47.798393  289404 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:17:47.988615  289404 api_server.go:165] Checking apiserver status ...
	I0412 20:17:47.988696  289404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:17:47.997702  289404 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:17:48.188966  289404 api_server.go:165] Checking apiserver status ...
	I0412 20:17:48.189070  289404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:17:48.198201  289404 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:17:48.389562  289404 api_server.go:165] Checking apiserver status ...
	I0412 20:17:48.389638  289404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:17:48.398668  289404 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:17:48.588987  289404 api_server.go:165] Checking apiserver status ...
	I0412 20:17:48.589084  289404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:17:48.598056  289404 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:17:48.789219  289404 api_server.go:165] Checking apiserver status ...
	I0412 20:17:48.789320  289404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:17:48.798195  289404 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:17:48.989476  289404 api_server.go:165] Checking apiserver status ...
	I0412 20:17:48.989556  289404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:17:48.998331  289404 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:17:49.188797  289404 api_server.go:165] Checking apiserver status ...
	I0412 20:17:49.188869  289404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:17:49.197864  289404 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:17:49.389165  289404 api_server.go:165] Checking apiserver status ...
	I0412 20:17:49.389236  289404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:17:49.398385  289404 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:17:49.398411  289404 api_server.go:165] Checking apiserver status ...
	I0412 20:17:49.398456  289404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:17:49.408292  289404 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:17:49.408328  289404 kubeadm.go:576] needs reconfigure: apiserver error: timed out waiting for the condition
	I0412 20:17:49.408337  289404 kubeadm.go:1067] stopping kube-system containers ...
	I0412 20:17:49.408350  289404 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0412 20:17:49.408412  289404 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0412 20:17:49.437804  289404 cri.go:87] found id: "1bd2c2fccd8c547472f81fd84ffcc85248838b6f6bded8d4ba9f1c12dfb234c1"
	I0412 20:17:49.437833  289404 cri.go:87] found id: "f03411fc533041f9ddcf991f18a51b6055896e203a19557ce49131bc9e7796b4"
	I0412 20:17:49.437841  289404 cri.go:87] found id: "d1642a69585f2b5d8f43901e8a491cead56c56ef33038261d4145d7959922b9b"
	I0412 20:17:49.437847  289404 cri.go:87] found id: "6cc69a6c92a9c7e418d30d94f1777cbd24a28b39c530a70bc05aa2bb9749c133"
	I0412 20:17:49.437853  289404 cri.go:87] found id: "e47ba7bc7187c135dde6e6c116fd570d9338c6fa80edee55405758c75532e6db"
	I0412 20:17:49.437859  289404 cri.go:87] found id: "f29f2d4e263bc07cd05cd9c61510d49796a96af91aaf3c20135c8e50227408a5"
	I0412 20:17:49.437864  289404 cri.go:87] found id: "e3d3ef830b73a6caad316df060603879e4acd4e12edca47bc38cbc8b4e8f67a1"
	I0412 20:17:49.437870  289404 cri.go:87] found id: ""
	I0412 20:17:49.437875  289404 cri.go:232] Stopping containers: [1bd2c2fccd8c547472f81fd84ffcc85248838b6f6bded8d4ba9f1c12dfb234c1 f03411fc533041f9ddcf991f18a51b6055896e203a19557ce49131bc9e7796b4 d1642a69585f2b5d8f43901e8a491cead56c56ef33038261d4145d7959922b9b 6cc69a6c92a9c7e418d30d94f1777cbd24a28b39c530a70bc05aa2bb9749c133 e47ba7bc7187c135dde6e6c116fd570d9338c6fa80edee55405758c75532e6db f29f2d4e263bc07cd05cd9c61510d49796a96af91aaf3c20135c8e50227408a5 e3d3ef830b73a6caad316df060603879e4acd4e12edca47bc38cbc8b4e8f67a1]
	I0412 20:17:49.437925  289404 ssh_runner.go:195] Run: which crictl
	I0412 20:17:49.441008  289404 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop 1bd2c2fccd8c547472f81fd84ffcc85248838b6f6bded8d4ba9f1c12dfb234c1 f03411fc533041f9ddcf991f18a51b6055896e203a19557ce49131bc9e7796b4 d1642a69585f2b5d8f43901e8a491cead56c56ef33038261d4145d7959922b9b 6cc69a6c92a9c7e418d30d94f1777cbd24a28b39c530a70bc05aa2bb9749c133 e47ba7bc7187c135dde6e6c116fd570d9338c6fa80edee55405758c75532e6db f29f2d4e263bc07cd05cd9c61510d49796a96af91aaf3c20135c8e50227408a5 e3d3ef830b73a6caad316df060603879e4acd4e12edca47bc38cbc8b4e8f67a1
	I0412 20:17:49.468746  289404 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0412 20:17:49.479225  289404 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0412 20:17:49.486664  289404 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5747 Apr 12 20:04 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5783 Apr 12 20:04 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5935 Apr 12 20:04 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5735 Apr 12 20:04 /etc/kubernetes/scheduler.conf
	
	I0412 20:17:49.486737  289404 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0412 20:17:49.493537  289404 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0412 20:17:49.500633  289404 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0412 20:17:49.507803  289404 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0412 20:17:49.515027  289404 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0412 20:17:49.522184  289404 kubeadm.go:678] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0412 20:17:49.522211  289404 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0412 20:17:49.574062  289404 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0412 20:17:50.154731  289404 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0412 20:17:50.308499  289404 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0412 20:17:50.384584  289404 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0412 20:17:50.509940  289404 api_server.go:51] waiting for apiserver process to appear ...
	I0412 20:17:50.510014  289404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:17:51.020417  289404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:17:51.521045  289404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:17:51.588779  289404 api_server.go:71] duration metric: took 1.078840712s to wait for apiserver process to appear ...
	I0412 20:17:51.588815  289404 api_server.go:87] waiting for apiserver healthz status ...
	I0412 20:17:51.588829  289404 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0412 20:17:51.589174  289404 api_server.go:256] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": dial tcp 192.168.67.2:8443: connect: connection refused
	I0412 20:17:52.089936  289404 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0412 20:17:55.386346  289404 api_server.go:266] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0412 20:17:55.386393  289404 api_server.go:102] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0412 20:17:55.589672  289404 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0412 20:17:55.679945  289404 api_server.go:266] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0412 20:17:55.680057  289404 api_server.go:102] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0412 20:17:56.089538  289404 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0412 20:17:56.094768  289404 api_server.go:266] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0412 20:17:56.094805  289404 api_server.go:102] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0412 20:17:56.589444  289404 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0412 20:17:56.594755  289404 api_server.go:266] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0412 20:17:56.601922  289404 api_server.go:140] control plane version: v1.16.0
	I0412 20:17:56.601948  289404 api_server.go:130] duration metric: took 5.013125628s to wait for apiserver health ...
	I0412 20:17:56.601958  289404 cni.go:93] Creating CNI manager for ""
	I0412 20:17:56.601965  289404 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0412 20:17:56.604004  289404 out.go:176] * Configuring CNI (Container Networking Interface) ...
	I0412 20:17:56.604109  289404 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0412 20:17:56.608013  289404 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.16.0/kubectl ...
	I0412 20:17:56.608039  289404 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0412 20:17:56.621855  289404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0412 20:17:56.828475  289404 system_pods.go:43] waiting for kube-system pods to appear ...
	I0412 20:17:56.835721  289404 system_pods.go:59] 8 kube-system pods found
	I0412 20:17:56.835755  289404 system_pods.go:61] "coredns-5644d7b6d9-z6lnj" [dac5b00a-e450-4c85-b1dd-54344be79d5a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
	I0412 20:17:56.835762  289404 system_pods.go:61] "etcd-old-k8s-version-20220412200421-42006" [8305edc2-21b5-4258-ad07-8687f7c7f76f] Running
	I0412 20:17:56.835766  289404 system_pods.go:61] "kindnet-xxqjk" [306e6dc0-594c-4013-acc5-0fcbdf38806f] Running
	I0412 20:17:56.835772  289404 system_pods.go:61] "kube-apiserver-old-k8s-version-20220412200421-42006" [bf9e128c-6913-44d5-b0a7-1954fbcbf9bc] Running
	I0412 20:17:56.835776  289404 system_pods.go:61] "kube-controller-manager-old-k8s-version-20220412200421-42006" [7fac424e-5a0c-410f-8d27-6519915d6d2f] Running
	I0412 20:17:56.835780  289404 system_pods.go:61] "kube-proxy-nt4pk" [e0d683c7-40fd-43e1-ac82-a740e53a8513] Running
	I0412 20:17:56.835784  289404 system_pods.go:61] "kube-scheduler-old-k8s-version-20220412200421-42006" [8e70e26b-0e21-40ae-9d51-d1f712a8800c] Running
	I0412 20:17:56.835790  289404 system_pods.go:61] "storage-provisioner" [fc4dc4cd-6bf9-4b27-953d-a654ba5e298a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
	I0412 20:17:56.835795  289404 system_pods.go:74] duration metric: took 7.294557ms to wait for pod list to return data ...
	I0412 20:17:56.835802  289404 node_conditions.go:102] verifying NodePressure condition ...
	I0412 20:17:56.838835  289404 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0412 20:17:56.838868  289404 node_conditions.go:123] node cpu capacity is 8
	I0412 20:17:56.838886  289404 node_conditions.go:105] duration metric: took 3.076017ms to run NodePressure ...
	I0412 20:17:56.838911  289404 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0412 20:17:57.010809  289404 kubeadm.go:737] waiting for restarted kubelet to initialise ...
	I0412 20:17:57.014381  289404 retry.go:31] will retry after 360.127272ms: kubelet not initialised
	I0412 20:17:57.378770  289404 retry.go:31] will retry after 436.71002ms: kubelet not initialised
	I0412 20:17:57.820671  289404 retry.go:31] will retry after 527.46423ms: kubelet not initialised
	I0412 20:17:58.352826  289404 retry.go:31] will retry after 780.162888ms: kubelet not initialised
	I0412 20:17:59.137522  289404 retry.go:31] will retry after 1.502072952s: kubelet not initialised
	I0412 20:18:00.644272  289404 retry.go:31] will retry after 1.073826528s: kubelet not initialised
	I0412 20:18:01.722982  289404 retry.go:31] will retry after 1.869541159s: kubelet not initialised
	I0412 20:18:03.598023  289404 retry.go:31] will retry after 2.549945972s: kubelet not initialised
	I0412 20:18:06.152243  289404 retry.go:31] will retry after 5.131623747s: kubelet not initialised
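	The run above cycles through three waits: repeated `pgrep` probes for a kube-apiserver process, HTTPS polls of `/healthz` until the apiserver answers 200, and `retry.go` backing off with growing delays while the restarted kubelet initialises. A minimal Go sketch of that poll-with-backoff shape (the endpoint, timeout, and backoff constants here are illustrative assumptions, not minikube's actual values):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// pollHealthz polls an apiserver /healthz endpoint until it returns 200
// or the deadline expires, sleeping a little longer after each failure,
// much like the api_server.go / retry.go lines above.
func pollHealthz(url string, deadline time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The test cluster uses a self-signed CA, so this sketch skips
		// verification; real code should trust the cluster CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	delay := 300 * time.Millisecond
	for start := time.Now(); time.Since(start) < deadline; {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned 200: control plane is up
			}
		}
		time.Sleep(delay)
		delay += delay / 2 // grow the delay between attempts
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	if err := pollHealthz("https://192.168.67.2:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}
```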
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	45fabe7cb7395       6de166512aa22       3 minutes ago       Exited              kindnet-cni               3                   9316c5fd3c63b
	99c30d34ba676       3c53fa8541f95       12 minutes ago      Running             kube-proxy                0                   3cb029bb303fd
	1549b6cbd198c       b0c9e5e4dbb14       12 minutes ago      Running             kube-controller-manager   0                   9d0f79bb073ce
	3ecbbe2de190c       3fc1d62d65872       12 minutes ago      Running             kube-apiserver            0                   b911569574c06
	3bb4ed6826e04       25f8c7f3da61c       12 minutes ago      Running             etcd                      0                   c8ba1e6aa297c
	e67989f440e43       884d49d6d8c9f       12 minutes ago      Running             kube-scheduler            0                   cae06935f0abb
	
	* 
	* ==> containerd <==
	* -- Logs begin at Tue 2022-04-12 20:05:24 UTC, end at Tue 2022-04-12 20:18:13 UTC. --
	Apr 12 20:11:29 embed-certs-20220412200510-42006 containerd[470]: time="2022-04-12T20:11:29.228285296Z" level=warning msg="cleaning up after shim disconnected" id=a3ab3b09e47d2204acbc8f870d4b903121d2535cbfc5b44e243f42dcffea2f9c namespace=k8s.io
	Apr 12 20:11:29 embed-certs-20220412200510-42006 containerd[470]: time="2022-04-12T20:11:29.228298872Z" level=info msg="cleaning up dead shim"
	Apr 12 20:11:29 embed-certs-20220412200510-42006 containerd[470]: time="2022-04-12T20:11:29.239504981Z" level=warning msg="cleanup warnings time=\"2022-04-12T20:11:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2412\n"
	Apr 12 20:11:30 embed-certs-20220412200510-42006 containerd[470]: time="2022-04-12T20:11:30.070466327Z" level=info msg="RemoveContainer for \"9477001e7ee3b30e9f16b66bf87b6b49322c15b624a1e90575725fc4655cc0ba\""
	Apr 12 20:11:30 embed-certs-20220412200510-42006 containerd[470]: time="2022-04-12T20:11:30.077844555Z" level=info msg="RemoveContainer for \"9477001e7ee3b30e9f16b66bf87b6b49322c15b624a1e90575725fc4655cc0ba\" returns successfully"
	Apr 12 20:11:43 embed-certs-20220412200510-42006 containerd[470]: time="2022-04-12T20:11:43.408889590Z" level=info msg="CreateContainer within sandbox \"9316c5fd3c63b7b246c2411406f65a7f4118e64aad905b71ac46068b5e7e0b84\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:2,}"
	Apr 12 20:11:43 embed-certs-20220412200510-42006 containerd[470]: time="2022-04-12T20:11:43.423240601Z" level=info msg="CreateContainer within sandbox \"9316c5fd3c63b7b246c2411406f65a7f4118e64aad905b71ac46068b5e7e0b84\" for &ContainerMetadata{Name:kindnet-cni,Attempt:2,} returns container id \"3e68560b60a91dac8935cf1d5d59e9fd8e103443c002e600103c36dfdeb5eda5\""
	Apr 12 20:11:43 embed-certs-20220412200510-42006 containerd[470]: time="2022-04-12T20:11:43.423913771Z" level=info msg="StartContainer for \"3e68560b60a91dac8935cf1d5d59e9fd8e103443c002e600103c36dfdeb5eda5\""
	Apr 12 20:11:43 embed-certs-20220412200510-42006 containerd[470]: time="2022-04-12T20:11:43.684240912Z" level=info msg="StartContainer for \"3e68560b60a91dac8935cf1d5d59e9fd8e103443c002e600103c36dfdeb5eda5\" returns successfully"
	Apr 12 20:14:23 embed-certs-20220412200510-42006 containerd[470]: time="2022-04-12T20:14:23.923976505Z" level=info msg="shim disconnected" id=3e68560b60a91dac8935cf1d5d59e9fd8e103443c002e600103c36dfdeb5eda5
	Apr 12 20:14:23 embed-certs-20220412200510-42006 containerd[470]: time="2022-04-12T20:14:23.924044599Z" level=warning msg="cleaning up after shim disconnected" id=3e68560b60a91dac8935cf1d5d59e9fd8e103443c002e600103c36dfdeb5eda5 namespace=k8s.io
	Apr 12 20:14:23 embed-certs-20220412200510-42006 containerd[470]: time="2022-04-12T20:14:23.924062670Z" level=info msg="cleaning up dead shim"
	Apr 12 20:14:23 embed-certs-20220412200510-42006 containerd[470]: time="2022-04-12T20:14:23.934397075Z" level=warning msg="cleanup warnings time=\"2022-04-12T20:14:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2513\n"
	Apr 12 20:14:24 embed-certs-20220412200510-42006 containerd[470]: time="2022-04-12T20:14:24.379921854Z" level=info msg="RemoveContainer for \"a3ab3b09e47d2204acbc8f870d4b903121d2535cbfc5b44e243f42dcffea2f9c\""
	Apr 12 20:14:24 embed-certs-20220412200510-42006 containerd[470]: time="2022-04-12T20:14:24.385389315Z" level=info msg="RemoveContainer for \"a3ab3b09e47d2204acbc8f870d4b903121d2535cbfc5b44e243f42dcffea2f9c\" returns successfully"
	Apr 12 20:14:54 embed-certs-20220412200510-42006 containerd[470]: time="2022-04-12T20:14:54.408249759Z" level=info msg="CreateContainer within sandbox \"9316c5fd3c63b7b246c2411406f65a7f4118e64aad905b71ac46068b5e7e0b84\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:3,}"
	Apr 12 20:14:54 embed-certs-20220412200510-42006 containerd[470]: time="2022-04-12T20:14:54.420714124Z" level=info msg="CreateContainer within sandbox \"9316c5fd3c63b7b246c2411406f65a7f4118e64aad905b71ac46068b5e7e0b84\" for &ContainerMetadata{Name:kindnet-cni,Attempt:3,} returns container id \"45fabe7cb7395e0c30a4393ad9200abaf7881d0466d5ffdcde46faf8e637daae\""
	Apr 12 20:14:54 embed-certs-20220412200510-42006 containerd[470]: time="2022-04-12T20:14:54.421165738Z" level=info msg="StartContainer for \"45fabe7cb7395e0c30a4393ad9200abaf7881d0466d5ffdcde46faf8e637daae\""
	Apr 12 20:14:54 embed-certs-20220412200510-42006 containerd[470]: time="2022-04-12T20:14:54.584417453Z" level=info msg="StartContainer for \"45fabe7cb7395e0c30a4393ad9200abaf7881d0466d5ffdcde46faf8e637daae\" returns successfully"
	Apr 12 20:17:34 embed-certs-20220412200510-42006 containerd[470]: time="2022-04-12T20:17:34.828907260Z" level=info msg="shim disconnected" id=45fabe7cb7395e0c30a4393ad9200abaf7881d0466d5ffdcde46faf8e637daae
	Apr 12 20:17:34 embed-certs-20220412200510-42006 containerd[470]: time="2022-04-12T20:17:34.828963844Z" level=warning msg="cleaning up after shim disconnected" id=45fabe7cb7395e0c30a4393ad9200abaf7881d0466d5ffdcde46faf8e637daae namespace=k8s.io
	Apr 12 20:17:34 embed-certs-20220412200510-42006 containerd[470]: time="2022-04-12T20:17:34.828978094Z" level=info msg="cleaning up dead shim"
	Apr 12 20:17:34 embed-certs-20220412200510-42006 containerd[470]: time="2022-04-12T20:17:34.839827432Z" level=warning msg="cleanup warnings time=\"2022-04-12T20:17:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2615\n"
	Apr 12 20:17:35 embed-certs-20220412200510-42006 containerd[470]: time="2022-04-12T20:17:35.709343768Z" level=info msg="RemoveContainer for \"3e68560b60a91dac8935cf1d5d59e9fd8e103443c002e600103c36dfdeb5eda5\""
	Apr 12 20:17:35 embed-certs-20220412200510-42006 containerd[470]: time="2022-04-12T20:17:35.713973352Z" level=info msg="RemoveContainer for \"3e68560b60a91dac8935cf1d5d59e9fd8e103443c002e600103c36dfdeb5eda5\" returns successfully"
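	These containerd entries trace one kindnet-cni crash cycle: CreateContainer and StartContainer succeed, the shim disconnects when the process exits, and the kubelet then asks for the previous attempt's container to be removed. A hedged sketch of inspecting what containerd holds in the `k8s.io` namespace with the public containerd Go client (socket path matches the node's cri-socket annotation below; error handling trimmed):

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Kubernetes-managed containers live in the "k8s.io" namespace,
	// matching the namespace=k8s.io fields in the log lines above.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	containers, err := client.Containers(ctx)
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range containers {
		info, err := c.Info(ctx)
		if err != nil {
			continue // container may have been removed mid-listing
		}
		fmt.Println(c.ID(), info.Image)
	}
}
```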
	
	* 
	* ==> describe nodes <==
	* Name:               embed-certs-20220412200510-42006
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-20220412200510-42006
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=dcd548d63d1c0dcbdc0ffc0bd37d4379117c142f
	                    minikube.k8s.io/name=embed-certs-20220412200510-42006
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_04_12T20_05_55_0700
	                    minikube.k8s.io/version=v1.25.2
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 12 Apr 2022 20:05:50 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-20220412200510-42006
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 12 Apr 2022 20:18:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 12 Apr 2022 20:16:21 +0000   Tue, 12 Apr 2022 20:05:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 12 Apr 2022 20:16:21 +0000   Tue, 12 Apr 2022 20:05:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 12 Apr 2022 20:16:21 +0000   Tue, 12 Apr 2022 20:05:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Tue, 12 Apr 2022 20:16:21 +0000   Tue, 12 Apr 2022 20:05:48 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    embed-certs-20220412200510-42006
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873828Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873828Ki
	  pods:               110
	System Info:
	  Machine ID:                 140a143b31184b58be947b52a01fff83
	  System UUID:                ce1f241f-9ecd-4653-8279-4a97e0fb4c59
	  Boot ID:                    16b2caa1-c1b9-4ccc-85b8-d4dc3f51a5e1
	  Kernel Version:             5.13.0-1023-gcp
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.5.10
	  Kubelet Version:            v1.23.5
	  Kube-Proxy Version:         v1.23.5
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                        ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-embed-certs-20220412200510-42006                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kindnet-7f7sj                                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-embed-certs-20220412200510-42006             250m (3%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-embed-certs-20220412200510-42006    200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-6nznr                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-embed-certs-20220412200510-42006             100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)    100m (1%)
	  memory             150Mi (0%)   50Mi (0%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  Starting                 12m                kube-proxy  
	  Normal  Starting                 12m                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  12m (x5 over 12m)  kubelet     Node embed-certs-20220412200510-42006 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x4 over 12m)  kubelet     Node embed-certs-20220412200510-42006 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x4 over 12m)  kubelet     Node embed-certs-20220412200510-42006 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 12m                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  12m                kubelet     Node embed-certs-20220412200510-42006 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m                kubelet     Node embed-certs-20220412200510-42006 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m                kubelet     Node embed-certs-20220412200510-42006 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet     Updated Node Allocatable limit across pods
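	The table above shows the node stuck at Ready=False with the KubeletNotReady / "cni plugin not initialized" message, which in turn keeps the node.kubernetes.io/not-ready:NoSchedule taint in place. A small client-go sketch for reading those same conditions and taints programmatically (the kubeconfig path is a placeholder; the node name is taken from the report):

```go
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path; point this at your own cluster.
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}
	node, err := clientset.CoreV1().Nodes().Get(context.Background(),
		"embed-certs-20220412200510-42006", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, cond := range node.Status.Conditions {
		// The Ready condition carries the same "cni plugin not
		// initialized" message shown in the describe output above.
		fmt.Printf("%s=%s: %s\n", cond.Type, cond.Status, cond.Message)
	}
	for _, t := range node.Spec.Taints {
		fmt.Println("taint:", t.Key, t.Effect)
	}
}
```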
	
	* 
	* ==> dmesg <==
	* [  +0.000009] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +0.125166] IPv4: martian source 10.244.0.7 from 10.244.0.7, on dev vethe3e22a2f
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff d2 83 e6 b4 2e c9 08 06
	[  +0.519855] IPv4: martian source 10.244.0.8 from 10.244.0.8, on dev vethde433a44
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fe f7 53 8a eb 26 08 06
	[  +0.208112] IPv4: martian source 10.244.0.9 from 10.244.0.9, on dev veth05fda112
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 06 c9 f0 64 c1 d9 08 06
	[Apr12 20:12] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +1.026706] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +1.023926] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +2.947865] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +1.023840] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +1.019933] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +2.959880] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +1.007861] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +1.023916] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	
	* 
	* ==> etcd [3bb4ed6826e041fff709fbb31d1f2446a15f08bcc0fa07eb151243acd0226bed] <==
	* {"level":"info","ts":"2022-04-12T20:05:48.083Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-04-12T20:05:48.083Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2022-04-12T20:05:48.083Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2022-04-12T20:05:48.083Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-04-12T20:05:48.083Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-04-12T20:05:48.617Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2022-04-12T20:05:48.617Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2022-04-12T20:05:48.617Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2022-04-12T20:05:48.617Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2022-04-12T20:05:48.617Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2022-04-12T20:05:48.617Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2022-04-12T20:05:48.617Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2022-04-12T20:05:48.617Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-04-12T20:05:48.619Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2022-04-12T20:05:48.619Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-04-12T20:05:48.619Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-04-12T20:05:48.619Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:embed-certs-20220412200510-42006 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2022-04-12T20:05:48.619Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-04-12T20:05:48.619Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-04-12T20:05:48.619Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-04-12T20:05:48.619Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-04-12T20:05:48.620Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-04-12T20:05:48.620Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2022-04-12T20:15:48.637Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":560}
	{"level":"info","ts":"2022-04-12T20:15:48.638Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":560,"took":"498.319µs"}
	
	* 
	* ==> kernel <==
	*  20:18:13 up  3:00,  0 users,  load average: 1.38, 0.96, 1.40
	Linux embed-certs-20220412200510-42006 5.13.0-1023-gcp #28~20.04.1-Ubuntu SMP Wed Mar 30 03:51:07 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [3ecbbe2de190c9c1e2f575bb88b355a7eaf09932cb16fd1a6cef069051de9930] <==
	* I0412 20:05:51.079090       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0412 20:05:51.079168       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I0412 20:05:51.079317       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0412 20:05:51.079334       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0412 20:05:51.081403       1 controller.go:611] quota admission added evaluator for: namespaces
	I0412 20:05:51.951431       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0412 20:05:51.956780       1 storage_scheduling.go:93] created PriorityClass system-node-critical with value 2000001000
	I0412 20:05:51.958625       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0412 20:05:51.960721       1 storage_scheduling.go:93] created PriorityClass system-cluster-critical with value 2000000000
	I0412 20:05:51.960740       1 storage_scheduling.go:109] all system priority classes are created successfully or already exist.
	I0412 20:05:52.453396       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0412 20:05:52.492042       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0412 20:05:52.622773       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0412 20:05:52.627636       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I0412 20:05:52.628832       1 controller.go:611] quota admission added evaluator for: endpoints
	I0412 20:05:52.632992       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0412 20:05:52.692975       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0412 20:05:53.108187       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0412 20:05:54.258431       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0412 20:05:54.266902       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0412 20:05:54.281209       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0412 20:06:06.703041       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0412 20:06:06.802578       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0412 20:06:07.429868       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	* 
	* ==> kube-controller-manager [1549b6cbd198c45abd7224f0fbd5ce0d6713b1d4c5ccbad32a34ac2b6a109d2d] <==
	* I0412 20:06:05.965796       1 range_allocator.go:374] Set node embed-certs-20220412200510-42006 PodCIDR to [10.244.0.0/24]
	I0412 20:06:05.965962       1 shared_informer.go:247] Caches are synced for PVC protection 
	I0412 20:06:05.988586       1 shared_informer.go:247] Caches are synced for taint 
	I0412 20:06:05.988690       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0412 20:06:05.988706       1 node_lifecycle_controller.go:1397] Initializing eviction metric for zone: 
	W0412 20:06:05.988857       1 node_lifecycle_controller.go:1012] Missing timestamp for Node embed-certs-20220412200510-42006. Assuming now as a timestamp.
	I0412 20:06:05.988920       1 node_lifecycle_controller.go:1163] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I0412 20:06:05.988871       1 event.go:294] "Event occurred" object="embed-certs-20220412200510-42006" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node embed-certs-20220412200510-42006 event: Registered Node embed-certs-20220412200510-42006 in Controller"
	I0412 20:06:06.049681       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	I0412 20:06:06.072407       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
	I0412 20:06:06.100997       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0412 20:06:06.117589       1 shared_informer.go:247] Caches are synced for disruption 
	I0412 20:06:06.117622       1 disruption.go:371] Sending events to api server.
	I0412 20:06:06.155080       1 shared_informer.go:247] Caches are synced for resource quota 
	I0412 20:06:06.158368       1 shared_informer.go:247] Caches are synced for resource quota 
	I0412 20:06:06.555369       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0412 20:06:06.555404       1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0412 20:06:06.586454       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0412 20:06:06.705486       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-64897985d to 2"
	I0412 20:06:06.809151       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-6nznr"
	I0412 20:06:06.809239       1 event.go:294] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-7f7sj"
	I0412 20:06:06.951974       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-64897985d to 1"
	I0412 20:06:06.955212       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-gnw47"
	I0412 20:06:06.962832       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-zvglg"
	I0412 20:06:06.997626       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-64897985d-gnw47"
	
	* 
	* ==> kube-proxy [99c30d34ba6769dbe90b18eefcf0db92072e5d977b32371ee959bba91b958dc9] <==
	* I0412 20:06:07.392554       1 node.go:163] Successfully retrieved node IP: 192.168.58.2
	I0412 20:06:07.392628       1 server_others.go:138] "Detected node IP" address="192.168.58.2"
	I0412 20:06:07.392660       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0412 20:06:07.419205       1 server_others.go:206] "Using iptables Proxier"
	I0412 20:06:07.419245       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0412 20:06:07.419257       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0412 20:06:07.419297       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0412 20:06:07.419807       1 server.go:656] "Version info" version="v1.23.5"
	I0412 20:06:07.422063       1 config.go:226] "Starting endpoint slice config controller"
	I0412 20:06:07.422089       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0412 20:06:07.422928       1 config.go:317] "Starting service config controller"
	I0412 20:06:07.422945       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0412 20:06:07.524186       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0412 20:06:07.524314       1 shared_informer.go:247] Caches are synced for service config 
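	kube-proxy's startup above follows the standard client-go informer pattern: start the config controllers, then block until the initial list completes ("Caches are synced for service config"). A minimal sketch of that same wait using a shared informer factory (placeholder kubeconfig path; resync period of 0 means list once, then rely on watches):

```go
package main

import (
	"fmt"
	"log"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}
	factory := informers.NewSharedInformerFactory(clientset, 0)
	svcInformer := factory.Core().V1().Services().Informer()

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)
	// Block until the initial list completes: the moment the log above
	// records as "Caches are synced for service config".
	if !cache.WaitForCacheSync(stop, svcInformer.HasSynced) {
		log.Fatal("cache never synced")
	}
	fmt.Println("services cached:", len(svcInformer.GetStore().List()))
}
```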
	
	* 
	* ==> kube-scheduler [e67989f440e4332c6ff00c54e8fa657032c034f05a0edc75576cb16ffd4794b0] <==
	* E0412 20:05:51.099919       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0412 20:05:51.099933       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0412 20:05:51.099991       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0412 20:05:51.099995       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0412 20:05:51.100017       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0412 20:05:51.100045       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0412 20:05:51.928224       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0412 20:05:51.928267       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0412 20:05:51.928229       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0412 20:05:51.928294       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0412 20:05:51.981180       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0412 20:05:51.981262       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0412 20:05:51.982338       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0412 20:05:51.982383       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0412 20:05:52.070012       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0412 20:05:52.070085       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0412 20:05:52.082539       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0412 20:05:52.082581       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0412 20:05:52.109222       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0412 20:05:52.109254       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0412 20:05:52.121424       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0412 20:05:52.121458       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0412 20:05:52.211687       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0412 20:05:52.211733       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0412 20:05:54.188758       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
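	The scheduler's burst of "forbidden" warnings is the normal startup race before the rbac/bootstrap-roles post-start hook finishes (the same hook the healthz output earlier reported as failed); the warnings stop once the caches sync at 20:05:54. A hedged sketch of testing such a permission from a client, via a SelfSubjectAccessReview (placeholder kubeconfig path; verb and resource chosen to mirror the scheduler's failed pod list):

```go
package main

import (
	"context"
	"fmt"
	"log"

	authv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}
	// Ask the apiserver whether the identity in the kubeconfig may list
	// pods cluster-wide: the same check the scheduler was failing above
	// until the bootstrap RBAC roles were reconciled.
	review := &authv1.SelfSubjectAccessReview{
		Spec: authv1.SelfSubjectAccessReviewSpec{
			ResourceAttributes: &authv1.ResourceAttributes{
				Verb:     "list",
				Resource: "pods",
			},
		},
	}
	resp, err := clientset.AuthorizationV1().SelfSubjectAccessReviews().
		Create(context.Background(), review, metav1.CreateOptions{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("allowed:", resp.Status.Allowed, resp.Status.Reason)
}
```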
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2022-04-12 20:05:24 UTC, end at Tue 2022-04-12 20:18:13 UTC. --
	Apr 12 20:16:44 embed-certs-20220412200510-42006 kubelet[1305]: E0412 20:16:44.773837    1305 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:16:49 embed-certs-20220412200510-42006 kubelet[1305]: E0412 20:16:49.774599    1305 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:16:54 embed-certs-20220412200510-42006 kubelet[1305]: E0412 20:16:54.776115    1305 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:16:59 embed-certs-20220412200510-42006 kubelet[1305]: E0412 20:16:59.777612    1305 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:17:04 embed-certs-20220412200510-42006 kubelet[1305]: E0412 20:17:04.779194    1305 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:17:09 embed-certs-20220412200510-42006 kubelet[1305]: E0412 20:17:09.780176    1305 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:17:14 embed-certs-20220412200510-42006 kubelet[1305]: E0412 20:17:14.780858    1305 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:17:19 embed-certs-20220412200510-42006 kubelet[1305]: E0412 20:17:19.782516    1305 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:17:24 embed-certs-20220412200510-42006 kubelet[1305]: E0412 20:17:24.783193    1305 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:17:29 embed-certs-20220412200510-42006 kubelet[1305]: E0412 20:17:29.784293    1305 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:17:34 embed-certs-20220412200510-42006 kubelet[1305]: E0412 20:17:34.785573    1305 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:17:35 embed-certs-20220412200510-42006 kubelet[1305]: I0412 20:17:35.708155    1305 scope.go:110] "RemoveContainer" containerID="3e68560b60a91dac8935cf1d5d59e9fd8e103443c002e600103c36dfdeb5eda5"
	Apr 12 20:17:35 embed-certs-20220412200510-42006 kubelet[1305]: I0412 20:17:35.708537    1305 scope.go:110] "RemoveContainer" containerID="45fabe7cb7395e0c30a4393ad9200abaf7881d0466d5ffdcde46faf8e637daae"
	Apr 12 20:17:35 embed-certs-20220412200510-42006 kubelet[1305]: E0412 20:17:35.708895    1305 pod_workers.go:949] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-7f7sj_kube-system(059bb69b-b8de-4f71-85b1-8d7391491598)\"" pod="kube-system/kindnet-7f7sj" podUID=059bb69b-b8de-4f71-85b1-8d7391491598
	Apr 12 20:17:39 embed-certs-20220412200510-42006 kubelet[1305]: E0412 20:17:39.786303    1305 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:17:44 embed-certs-20220412200510-42006 kubelet[1305]: E0412 20:17:44.786993    1305 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:17:49 embed-certs-20220412200510-42006 kubelet[1305]: I0412 20:17:49.405605    1305 scope.go:110] "RemoveContainer" containerID="45fabe7cb7395e0c30a4393ad9200abaf7881d0466d5ffdcde46faf8e637daae"
	Apr 12 20:17:49 embed-certs-20220412200510-42006 kubelet[1305]: E0412 20:17:49.406218    1305 pod_workers.go:949] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-7f7sj_kube-system(059bb69b-b8de-4f71-85b1-8d7391491598)\"" pod="kube-system/kindnet-7f7sj" podUID=059bb69b-b8de-4f71-85b1-8d7391491598
	Apr 12 20:17:49 embed-certs-20220412200510-42006 kubelet[1305]: E0412 20:17:49.788359    1305 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:17:54 embed-certs-20220412200510-42006 kubelet[1305]: E0412 20:17:54.789689    1305 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:17:59 embed-certs-20220412200510-42006 kubelet[1305]: E0412 20:17:59.791113    1305 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:18:00 embed-certs-20220412200510-42006 kubelet[1305]: I0412 20:18:00.405123    1305 scope.go:110] "RemoveContainer" containerID="45fabe7cb7395e0c30a4393ad9200abaf7881d0466d5ffdcde46faf8e637daae"
	Apr 12 20:18:00 embed-certs-20220412200510-42006 kubelet[1305]: E0412 20:18:00.405413    1305 pod_workers.go:949] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-7f7sj_kube-system(059bb69b-b8de-4f71-85b1-8d7391491598)\"" pod="kube-system/kindnet-7f7sj" podUID=059bb69b-b8de-4f71-85b1-8d7391491598
	Apr 12 20:18:04 embed-certs-20220412200510-42006 kubelet[1305]: E0412 20:18:04.792676    1305 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:18:09 embed-certs-20220412200510-42006 kubelet[1305]: E0412 20:18:09.794126    1305 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	

                                                
                                                
-- /stdout --
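The kubelet log above shows the actual failure mode: the kindnet-cni container is stuck in CrashLoopBackOff, so the CNI never initializes and the runtime keeps reporting "Container runtime network not ready". As a post-mortem sketch (assuming the test cluster is still up; the pod name and container ID below are taken verbatim from the kubelet entries above):

	# Fetch the logs of the previously crashed kindnet-cni container
	kubectl --context embed-certs-20220412200510-42006 -n kube-system logs kindnet-7f7sj --previous
	# Or read them straight from the CRI inside the node, using the container ID from the RemoveContainer entries
	minikube -p embed-certs-20220412200510-42006 ssh -- sudo crictl logs 45fabe7cb7395e0c30a4393ad9200abaf7881d0466d5ffdcde46faf8e637daae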
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20220412200510-42006 -n embed-certs-20220412200510-42006
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-20220412200510-42006 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: busybox coredns-64897985d-zvglg storage-provisioner
helpers_test.go:272: ======> post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context embed-certs-20220412200510-42006 describe pod busybox coredns-64897985d-zvglg storage-provisioner
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context embed-certs-20220412200510-42006 describe pod busybox coredns-64897985d-zvglg storage-provisioner: exit status 1 (59.360573ms)

                                                
                                                
-- stdout --
	Name:         busybox
	Namespace:    default
	Priority:     0
	Node:         <none>
	Labels:       integration-test=busybox
	Annotations:  <none>
	Status:       Pending
	IP:           
	IPs:          <none>
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mdqq8 (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-mdqq8:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                 From               Message
	  ----     ------            ----                ----               -------
	  Warning  FailedScheduling  49s (x8 over 8m5s)  default-scheduler  0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "coredns-64897985d-zvglg" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context embed-certs-20220412200510-42006 describe pod busybox coredns-64897985d-zvglg storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (484.59s)
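The FailedScheduling event above follows from the same CNI failure: the lone node never drops its node.kubernetes.io/not-ready taint, so the busybox pod stays Pending until the test gives up. A minimal way to confirm the taint (a sketch, assuming the cluster and its kubeconfig context are still reachable):

	# Print each node's name and taints
	kubectl --context embed-certs-20220412200510-42006 get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints}{"\n"}{end}'
	# Cross-check the node's Ready condition
	kubectl --context embed-certs-20220412200510-42006 get nodes -o wide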

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/FirstStart (296.55s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:170: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-different-port-20220412201228-42006 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.5

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:170: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p default-k8s-different-port-20220412201228-42006 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.5: exit status 80 (4m54.288562268s)
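This run also exits with status 80, after roughly five minutes. One detail specific to this test: the API server is pinned to port 8444 inside the node container, and the docker run in the stderr trace below publishes it as 127.0.0.1::8444, so Docker assigns an ephemeral host port. A quick way to recover that mapping (a sketch, assuming the node container still exists):

	# Ask Docker which host port was bound to the API server's 8444/tcp
	docker port default-k8s-different-port-20220412201228-42006 8444
	# Equivalent lookup via container metadata, mirroring the inspect format minikube itself uses for 22/tcp
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}' default-k8s-different-port-20220412201228-42006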

                                                
                                                
-- stdout --
	* [default-k8s-different-port-20220412201228-42006] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=13812
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on user configuration
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	* Using Docker driver with the root privilege
	* Starting control plane node default-k8s-different-port-20220412201228-42006 in cluster default-k8s-different-port-20220412201228-42006
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.23.5 on containerd 1.5.10 ...
	  - kubelet.cni-conf-dir=/etc/cni/net.mk
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0412 20:12:28.414615  273955 out.go:297] Setting OutFile to fd 1 ...
	I0412 20:12:28.414753  273955 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0412 20:12:28.414764  273955 out.go:310] Setting ErrFile to fd 2...
	I0412 20:12:28.414771  273955 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0412 20:12:28.414887  273955 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/bin
	I0412 20:12:28.415207  273955 out.go:304] Setting JSON to false
	I0412 20:12:28.416583  273955 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":10502,"bootTime":1649783847,"procs":359,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1023-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0412 20:12:28.416668  273955 start.go:125] virtualization: kvm guest
	I0412 20:12:28.419729  273955 out.go:176] * [default-k8s-different-port-20220412201228-42006] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	I0412 20:12:28.421910  273955 out.go:176]   - MINIKUBE_LOCATION=13812
	I0412 20:12:28.419947  273955 notify.go:193] Checking for updates...
	I0412 20:12:28.423611  273955 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0412 20:12:28.425312  273955 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0412 20:12:28.426989  273955 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube
	I0412 20:12:28.428657  273955 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0412 20:12:28.429389  273955 config.go:178] Loaded profile config "bridge-20220412195202-42006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.5
	I0412 20:12:28.429551  273955 config.go:178] Loaded profile config "embed-certs-20220412200510-42006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.5
	I0412 20:12:28.429690  273955 config.go:178] Loaded profile config "old-k8s-version-20220412200421-42006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I0412 20:12:28.429784  273955 driver.go:346] Setting default libvirt URI to qemu:///system
	I0412 20:12:28.473881  273955 docker.go:137] docker version: linux-20.10.14
	I0412 20:12:28.474000  273955 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0412 20:12:28.571763  273955 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:75 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:49 SystemTime:2022-04-12 20:12:28.506487008 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1023-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662799872 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0412 20:12:28.571889  273955 docker.go:254] overlay module found
	I0412 20:12:28.574179  273955 out.go:176] * Using the docker driver based on user configuration
	I0412 20:12:28.574216  273955 start.go:284] selected driver: docker
	I0412 20:12:28.574223  273955 start.go:801] validating driver "docker" against <nil>
	I0412 20:12:28.574247  273955 start.go:812] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	W0412 20:12:28.574294  273955 oci.go:120] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0412 20:12:28.574316  273955 out.go:241] ! Your cgroup does not allow setting memory.
	! Your cgroup does not allow setting memory.
	I0412 20:12:28.575702  273955 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0412 20:12:28.576414  273955 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0412 20:12:28.676440  273955 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:75 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:49 SystemTime:2022-04-12 20:12:28.609963563 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1023-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662799872 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0412 20:12:28.676591  273955 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0412 20:12:28.676948  273955 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0412 20:12:28.679209  273955 out.go:176] * Using Docker driver with the root privilege
	I0412 20:12:28.679242  273955 cni.go:93] Creating CNI manager for ""
	I0412 20:12:28.679253  273955 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0412 20:12:28.679270  273955 cni.go:217] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0412 20:12:28.679279  273955 cni.go:222] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0412 20:12:28.679288  273955 start_flags.go:301] Found "CNI" CNI - setting NetworkPlugin=cni
	I0412 20:12:28.679307  273955 start_flags.go:306] config:
	{Name:default-k8s-different-port-20220412201228-42006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:default-k8s-different-port-20220412201228-42006 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0412 20:12:28.681370  273955 out.go:176] * Starting control plane node default-k8s-different-port-20220412201228-42006 in cluster default-k8s-different-port-20220412201228-42006
	I0412 20:12:28.681423  273955 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0412 20:12:28.682850  273955 out.go:176] * Pulling base image ...
	I0412 20:12:28.682885  273955 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime containerd
	I0412 20:12:28.682923  273955 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-containerd-overlay2-amd64.tar.lz4
	I0412 20:12:28.682951  273955 cache.go:57] Caching tarball of preloaded images
	I0412 20:12:28.683016  273955 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon
	I0412 20:12:28.683224  273955 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0412 20:12:28.683244  273955 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.5 on containerd
	I0412 20:12:28.683372  273955 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220412201228-42006/config.json ...
	I0412 20:12:28.683400  273955 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220412201228-42006/config.json: {Name:mk531de7b88e895a8df78d4b0e44976d2a47dea7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 20:12:28.730106  273955 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon, skipping pull
	I0412 20:12:28.730143  273955 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 exists in daemon, skipping load
	I0412 20:12:28.730166  273955 cache.go:206] Successfully downloaded all kic artifacts
	I0412 20:12:28.730226  273955 start.go:352] acquiring machines lock for default-k8s-different-port-20220412201228-42006: {Name:mk673e2ef5ad74005354b6f8044ae48e370ea3c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0412 20:12:28.730379  273955 start.go:356] acquired machines lock for "default-k8s-different-port-20220412201228-42006" in 129.77µs
	I0412 20:12:28.730412  273955 start.go:91] Provisioning new machine with config: &{Name:default-k8s-different-port-20220412201228-42006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:default-k8s-different-port-20220412201228-42006 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.23.5 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8444 KubernetesVersion:v1.23.5 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0412 20:12:28.730521  273955 start.go:131] createHost starting for "" (driver="docker")
	I0412 20:12:28.732971  273955 out.go:203] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0412 20:12:28.733222  273955 start.go:165] libmachine.API.Create for "default-k8s-different-port-20220412201228-42006" (driver="docker")
	I0412 20:12:28.733261  273955 client.go:168] LocalClient.Create starting
	I0412 20:12:28.733353  273955 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem
	I0412 20:12:28.733392  273955 main.go:134] libmachine: Decoding PEM data...
	I0412 20:12:28.733417  273955 main.go:134] libmachine: Parsing certificate...
	I0412 20:12:28.733490  273955 main.go:134] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem
	I0412 20:12:28.733517  273955 main.go:134] libmachine: Decoding PEM data...
	I0412 20:12:28.733537  273955 main.go:134] libmachine: Parsing certificate...
	I0412 20:12:28.733883  273955 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220412201228-42006 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0412 20:12:28.767853  273955 cli_runner.go:211] docker network inspect default-k8s-different-port-20220412201228-42006 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0412 20:12:28.767938  273955 network_create.go:272] running [docker network inspect default-k8s-different-port-20220412201228-42006] to gather additional debugging logs...
	I0412 20:12:28.767963  273955 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220412201228-42006
	W0412 20:12:28.803886  273955 cli_runner.go:211] docker network inspect default-k8s-different-port-20220412201228-42006 returned with exit code 1
	I0412 20:12:28.803928  273955 network_create.go:275] error running [docker network inspect default-k8s-different-port-20220412201228-42006]: docker network inspect default-k8s-different-port-20220412201228-42006: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: default-k8s-different-port-20220412201228-42006
	I0412 20:12:28.803960  273955 network_create.go:277] output of [docker network inspect default-k8s-different-port-20220412201228-42006]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: default-k8s-different-port-20220412201228-42006
	
	** /stderr **
	I0412 20:12:28.804025  273955 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0412 20:12:28.839399  273955 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000722578] misses:0}
	I0412 20:12:28.839456  273955 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0412 20:12:28.839486  273955 network_create.go:115] attempt to create docker network default-k8s-different-port-20220412201228-42006 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0412 20:12:28.839543  273955 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true default-k8s-different-port-20220412201228-42006
	I0412 20:12:28.916721  273955 network_create.go:99] docker network default-k8s-different-port-20220412201228-42006 192.168.49.0/24 created
	I0412 20:12:28.916758  273955 kic.go:106] calculated static IP "192.168.49.2" for the "default-k8s-different-port-20220412201228-42006" container
	I0412 20:12:28.916815  273955 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0412 20:12:28.952553  273955 cli_runner.go:164] Run: docker volume create default-k8s-different-port-20220412201228-42006 --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220412201228-42006 --label created_by.minikube.sigs.k8s.io=true
	I0412 20:12:28.987842  273955 oci.go:103] Successfully created a docker volume default-k8s-different-port-20220412201228-42006
	I0412 20:12:28.987940  273955 cli_runner.go:164] Run: docker run --rm --name default-k8s-different-port-20220412201228-42006-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220412201228-42006 --entrypoint /usr/bin/test -v default-k8s-different-port-20220412201228-42006:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -d /var/lib
	I0412 20:12:29.567950  273955 oci.go:107] Successfully prepared a docker volume default-k8s-different-port-20220412201228-42006
	I0412 20:12:29.568006  273955 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime containerd
	I0412 20:12:29.568028  273955 kic.go:179] Starting extracting preloaded images to volume ...
	I0412 20:12:29.568105  273955 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-different-port-20220412201228-42006:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -I lz4 -xf /preloaded.tar -C /extractDir
	I0412 20:12:37.268259  273955 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-different-port-20220412201228-42006:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 -I lz4 -xf /preloaded.tar -C /extractDir: (7.700077951s)
	I0412 20:12:37.268298  273955 kic.go:188] duration metric: took 7.700265 seconds to extract preloaded images to volume
	W0412 20:12:37.268341  273955 oci.go:136] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0412 20:12:37.268353  273955 oci.go:120] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0412 20:12:37.268408  273955 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0412 20:12:37.366909  273955 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-different-port-20220412201228-42006 --name default-k8s-different-port-20220412201228-42006 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-different-port-20220412201228-42006 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-different-port-20220412201228-42006 --network default-k8s-different-port-20220412201228-42006 --ip 192.168.49.2 --volume default-k8s-different-port-20220412201228-42006:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5
	I0412 20:12:37.813503  273955 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220412201228-42006 --format={{.State.Running}}
	I0412 20:12:37.850526  273955 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220412201228-42006 --format={{.State.Status}}
	I0412 20:12:37.887730  273955 cli_runner.go:164] Run: docker exec default-k8s-different-port-20220412201228-42006 stat /var/lib/dpkg/alternatives/iptables
	I0412 20:12:37.958096  273955 oci.go:279] the created container "default-k8s-different-port-20220412201228-42006" has a running status.
	I0412 20:12:37.958132  273955 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220412201228-42006/id_rsa...
	I0412 20:12:38.353984  273955 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220412201228-42006/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0412 20:12:38.440267  273955 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220412201228-42006 --format={{.State.Status}}
	I0412 20:12:38.475034  273955 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0412 20:12:38.475061  273955 kic_runner.go:114] Args: [docker exec --privileged default-k8s-different-port-20220412201228-42006 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0412 20:12:38.580253  273955 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220412201228-42006 --format={{.State.Status}}
	I0412 20:12:38.615405  273955 machine.go:88] provisioning docker machine ...
	I0412 20:12:38.615450  273955 ubuntu.go:169] provisioning hostname "default-k8s-different-port-20220412201228-42006"
	I0412 20:12:38.615514  273955 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220412201228-42006
	I0412 20:12:38.649260  273955 main.go:134] libmachine: Using SSH client type: native
	I0412 20:12:38.649512  273955 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7d71c0] 0x7da220 <nil>  [] 0s} 127.0.0.1 49412 <nil> <nil>}
	I0412 20:12:38.649540  273955 main.go:134] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20220412201228-42006 && echo "default-k8s-different-port-20220412201228-42006" | sudo tee /etc/hostname
	I0412 20:12:38.781790  273955 main.go:134] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20220412201228-42006
	
	I0412 20:12:38.781900  273955 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220412201228-42006
	I0412 20:12:38.817514  273955 main.go:134] libmachine: Using SSH client type: native
	I0412 20:12:38.817687  273955 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7d71c0] 0x7da220 <nil>  [] 0s} 127.0.0.1 49412 <nil> <nil>}
	I0412 20:12:38.817718  273955 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20220412201228-42006' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20220412201228-42006/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20220412201228-42006' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0412 20:12:38.940297  273955 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0412 20:12:38.940333  273955 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube}
	I0412 20:12:38.940359  273955 ubuntu.go:177] setting up certificates
	I0412 20:12:38.940371  273955 provision.go:83] configureAuth start
	I0412 20:12:38.940434  273955 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220412201228-42006
	I0412 20:12:38.974058  273955 provision.go:138] copyHostCerts
	I0412 20:12:38.974121  273955 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem, removing ...
	I0412 20:12:38.974130  273955 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem
	I0412 20:12:38.974195  273955 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem (1082 bytes)
	I0412 20:12:38.974272  273955 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem, removing ...
	I0412 20:12:38.974283  273955 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem
	I0412 20:12:38.974307  273955 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem (1123 bytes)
	I0412 20:12:38.974367  273955 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem, removing ...
	I0412 20:12:38.974380  273955 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem
	I0412 20:12:38.974402  273955 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem (1675 bytes)
	I0412 20:12:38.974460  273955 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20220412201228-42006 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-different-port-20220412201228-42006]
	I0412 20:12:39.033397  273955 provision.go:172] copyRemoteCerts
	I0412 20:12:39.033464  273955 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0412 20:12:39.033503  273955 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220412201228-42006
	I0412 20:12:39.067443  273955 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49412 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220412201228-42006/id_rsa Username:docker}
	I0412 20:12:39.156339  273955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0412 20:12:39.174718  273955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem --> /etc/docker/server.pem (1310 bytes)
	I0412 20:12:39.193937  273955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0412 20:12:39.213405  273955 provision.go:86] duration metric: configureAuth took 273.011761ms
	I0412 20:12:39.213437  273955 ubuntu.go:193] setting minikube options for container-runtime
	I0412 20:12:39.213610  273955 config.go:178] Loaded profile config "default-k8s-different-port-20220412201228-42006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.5
	I0412 20:12:39.213623  273955 machine.go:91] provisioned docker machine in 598.193413ms
	I0412 20:12:39.213629  273955 client.go:171] LocalClient.Create took 10.480357305s
	I0412 20:12:39.213645  273955 start.go:173] duration metric: libmachine.API.Create for "default-k8s-different-port-20220412201228-42006" took 10.480425661s
	I0412 20:12:39.213662  273955 start.go:306] post-start starting for "default-k8s-different-port-20220412201228-42006" (driver="docker")
	I0412 20:12:39.213669  273955 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0412 20:12:39.213710  273955 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0412 20:12:39.213747  273955 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220412201228-42006
	I0412 20:12:39.247304  273955 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49412 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220412201228-42006/id_rsa Username:docker}
	I0412 20:12:39.336107  273955 ssh_runner.go:195] Run: cat /etc/os-release
	I0412 20:12:39.338855  273955 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0412 20:12:39.338881  273955 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0412 20:12:39.338893  273955 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0412 20:12:39.338900  273955 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0412 20:12:39.338910  273955 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/addons for local assets ...
	I0412 20:12:39.338962  273955 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files for local assets ...
	I0412 20:12:39.339033  273955 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/420062.pem -> 420062.pem in /etc/ssl/certs
	I0412 20:12:39.339113  273955 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0412 20:12:39.346063  273955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/420062.pem --> /etc/ssl/certs/420062.pem (1708 bytes)
	I0412 20:12:39.363852  273955 start.go:309] post-start completed in 150.17334ms
	I0412 20:12:39.364261  273955 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220412201228-42006
	I0412 20:12:39.398230  273955 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220412201228-42006/config.json ...
	I0412 20:12:39.398493  273955 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0412 20:12:39.398539  273955 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220412201228-42006
	I0412 20:12:39.431860  273955 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49412 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220412201228-42006/id_rsa Username:docker}
	I0412 20:12:39.516960  273955 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0412 20:12:39.521105  273955 start.go:134] duration metric: createHost completed in 10.790568289s
	I0412 20:12:39.521135  273955 start.go:81] releasing machines lock for "default-k8s-different-port-20220412201228-42006", held for 10.790743793s
	I0412 20:12:39.521226  273955 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220412201228-42006
	I0412 20:12:39.555917  273955 ssh_runner.go:195] Run: systemctl --version
	I0412 20:12:39.555988  273955 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220412201228-42006
	I0412 20:12:39.556002  273955 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0412 20:12:39.556095  273955 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220412201228-42006
	I0412 20:12:39.593681  273955 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49412 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220412201228-42006/id_rsa Username:docker}
	I0412 20:12:39.595089  273955 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49412 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220412201228-42006/id_rsa Username:docker}
	I0412 20:12:39.680814  273955 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0412 20:12:39.702816  273955 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0412 20:12:39.714013  273955 docker.go:183] disabling docker service ...
	I0412 20:12:39.714094  273955 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0412 20:12:39.732270  273955 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0412 20:12:39.743074  273955 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0412 20:12:39.832116  273955 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0412 20:12:39.919522  273955 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0412 20:12:39.929367  273955 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0412 20:12:39.942351  273955 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLm1vbml0b3IudjEuY2dyb3VwcyJdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNiIKICAgIHN0YXRzX2NvbGxlY3RfcGVyaW9kID0gMTAKICAgIGVuYWJsZV90bHNfc3RyZWFtaW5nID0gZmFsc2UKICAgIG1heF9jb250YWluZXJfbG9nX2xpbmVfc2l6ZSA9IDE2Mzg0CiAgICByZXN0cmljdF9vb21fc2NvcmVfYWRqID0gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZF0KICAgICAgZGlzY2FyZF91bnBhY2tlZF9sYXllcnMgPSB0cnVlCiAgICAgIHNuYXBzaG90dGVyID0gIm92ZXJsYXlmcyIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQuZGVmYXVsdF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnVudHJ1c3RlZF93b3JrbG9hZF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICIiCiAgICAgICAgcnVudGltZV9lbmdpbmUgPSAiIgogICAgICAgIHJ1bnRpbWVfcm9vdCA9ICIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzLnJ1bmNdCiAgICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICBTeXN0ZW1kQ2dyb3VwID0gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY25pXQogICAgICBiaW5fZGlyID0gIi9vcHQvY25pL2JpbiIKICAgICAgY29uZl9kaXIgPSAiL2V0Yy9jbmkvbmV0Lm1rIgogICAgICBjb25mX3RlbXBsYXRlID0gIiIKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5yZWdpc3RyeV0KICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5Lm1pcnJvcnNdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5Lm1pcnJvcnMuImRvY2tlci5pbyJdCiAgICAgICAgICBlbmRwb2ludCA9IFsiaHR0cHM6Ly9yZWdpc3RyeS0xLmRvY2tlci5pbyJdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuc2VydmljZS52MS5kaWZmLXNlcnZpY2UiXQogICAgZGVmYXVsdCA9IFsid2Fsa2luZyJdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ2MudjEuc2NoZWR1bGVyIl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
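	(Editor's note: for readability, the base64 payload above decodes to the /etc/containerd/config.toml that minikube installs; the full file is recoverable with base64 -d. Decoding a few recognizable fragments gives, among other settings:

	    version = 2
	    root = "/var/lib/containerd"
	    state = "/run/containerd"
	    sandbox_image = "k8s.gcr.io/pause:3.6"
	    snapshotter = "overlayfs"
	    runtime_type = "io.containerd.runc.v2"
	    SystemdCgroup = false
	    bin_dir = "/opt/cni/bin"
	    conf_dir = "/etc/cni/net.mk"

	Note that conf_dir matches the kubelet.cni-conf-dir=/etc/cni/net.mk flag echoed later in the log.)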
	I0412 20:12:39.956036  273955 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0412 20:12:39.963067  273955 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0412 20:12:39.969796  273955 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0412 20:12:40.049136  273955 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0412 20:12:40.116342  273955 start.go:441] Will wait 60s for socket path /run/containerd/containerd.sock
	I0412 20:12:40.116422  273955 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0412 20:12:40.120327  273955 start.go:462] Will wait 60s for crictl version
	I0412 20:12:40.120389  273955 ssh_runner.go:195] Run: sudo crictl version
	I0412 20:12:40.146515  273955 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-04-12T20:12:40Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0412 20:12:51.193552  273955 ssh_runner.go:195] Run: sudo crictl version
	I0412 20:12:51.239976  273955 start.go:471] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.5.10
	RuntimeApiVersion:  v1alpha2
	I0412 20:12:51.240054  273955 ssh_runner.go:195] Run: containerd --version
	I0412 20:12:51.265928  273955 ssh_runner.go:195] Run: containerd --version
	I0412 20:12:51.292642  273955 out.go:176] * Preparing Kubernetes v1.23.5 on containerd 1.5.10 ...
	I0412 20:12:51.292735  273955 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220412201228-42006 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0412 20:12:51.330395  273955 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0412 20:12:51.334536  273955 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
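	(Editor's note: the bash one-liner above is minikube's host-record injection idiom: filter any existing host.minikube.internal line out of /etc/hosts, append a fresh record, write the result to a temp file, then sudo-copy it back over /etc/hosts. The same logic expressed in Go, as a sketch; the function name is illustrative:

	    package main

	    import (
	        "os"
	        "strings"
	    )

	    // injectHostRecord drops any stale "<name>" entry from the hosts
	    // file, appends "ip\tname", and rewrites the file, mirroring the
	    // grep -v / echo / cp pipeline in the log.
	    func injectHostRecord(path, ip, name string) error {
	        data, err := os.ReadFile(path)
	        if err != nil {
	            return err
	        }
	        var kept []string
	        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
	            if !strings.HasSuffix(line, "\t"+name) {
	                kept = append(kept, line)
	            }
	        }
	        kept = append(kept, ip+"\t"+name)
	        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
	    }

	    func main() {
	        // Requires root, just like the sudo cp in the log.
	        _ = injectHostRecord("/etc/hosts", "192.168.49.1", "host.minikube.internal")
	    }

	The same idiom recurs below for control-plane.minikube.internal.)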
	I0412 20:12:51.348868  273955 out.go:176]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0412 20:12:51.348965  273955 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime containerd
	I0412 20:12:51.349036  273955 ssh_runner.go:195] Run: sudo crictl images --output json
	I0412 20:12:51.384365  273955 containerd.go:607] all images are preloaded for containerd runtime.
	I0412 20:12:51.384397  273955 containerd.go:521] Images already preloaded, skipping extraction
	I0412 20:12:51.384454  273955 ssh_runner.go:195] Run: sudo crictl images --output json
	I0412 20:12:51.414048  273955 containerd.go:607] all images are preloaded for containerd runtime.
	I0412 20:12:51.414076  273955 cache_images.go:84] Images are preloaded, skipping loading
	I0412 20:12:51.414130  273955 ssh_runner.go:195] Run: sudo crictl info
	I0412 20:12:51.447485  273955 cni.go:93] Creating CNI manager for ""
	I0412 20:12:51.447520  273955 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0412 20:12:51.447536  273955 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0412 20:12:51.447559  273955 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8444 KubernetesVersion:v1.23.5 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20220412201228-42006 NodeName:default-k8s-different-port-20220412201228-42006 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0412 20:12:51.447732  273955 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "default-k8s-different-port-20220412201228-42006"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.5
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
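	(Editor's note: the manifest above is rendered from the kubeadm options struct printed at kubeadm.go:158; bindPort and controlPlaneEndpoint carry the non-default 8444 this test exercises. A toy text/template rendering of just the InitConfiguration header, with illustrative field names rather than minikube's real template:

	    package main

	    import (
	        "os"
	        "text/template"
	    )

	    // Concatenated strings keep the YAML flush-left regardless of how
	    // this snippet itself is indented.
	    const initCfg = "apiVersion: kubeadm.k8s.io/v1beta3\n" +
	        "kind: InitConfiguration\n" +
	        "localAPIEndpoint:\n" +
	        "  advertiseAddress: {{.AdvertiseAddress}}\n" +
	        "  bindPort: {{.APIServerPort}}\n"

	    func main() {
	        t := template.Must(template.New("kubeadm").Parse(initCfg))
	        _ = t.Execute(os.Stdout, struct {
	            AdvertiseAddress string
	            APIServerPort    int
	        }{"192.168.49.2", 8444})
	    }

	Output matches the first five lines of the config above.)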
	
	I0412 20:12:51.447819  273955 kubeadm.go:936] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.5/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=default-k8s-different-port-20220412201228-42006 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.5 ClusterName:default-k8s-different-port-20220412201228-42006 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0412 20:12:51.447873  273955 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.5
	I0412 20:12:51.456141  273955 binaries.go:44] Found k8s binaries, skipping transfer
	I0412 20:12:51.456204  273955 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0412 20:12:51.463726  273955 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (592 bytes)
	I0412 20:12:51.478110  273955 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0412 20:12:51.493924  273955 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2076 bytes)
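	(Editor's note: "scp memory" in the three lines above means the source is an in-memory buffer — the kubelet drop-in, the unit file, and kubeadm.yaml — streamed to the node over the SSH session opened earlier rather than copied from local files. One way to express that pattern with golang.org/x/crypto/ssh; the stdin-into-tee mechanism is an assumption for illustration, minikube's sshutil may differ:

	    package sshcopy

	    import (
	        "bytes"

	        "golang.org/x/crypto/ssh"
	    )

	    // writeRemote streams data to dst on the remote host by piping it
	    // into sudo tee over a session on an existing SSH connection.
	    func writeRemote(client *ssh.Client, data []byte, dst string) error {
	        sess, err := client.NewSession()
	        if err != nil {
	            return err
	        }
	        defer sess.Close()
	        sess.Stdin = bytes.NewReader(data)
	        return sess.Run("sudo tee " + dst + " > /dev/null")
	    })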
	I0412 20:12:51.509099  273955 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0412 20:12:51.512397  273955 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0412 20:12:51.522429  273955 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220412201228-42006 for IP: 192.168.49.2
	I0412 20:12:51.522557  273955 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.key
	I0412 20:12:51.522608  273955 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.key
	I0412 20:12:51.522672  273955 certs.go:302] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220412201228-42006/client.key
	I0412 20:12:51.522694  273955 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220412201228-42006/client.crt with IP's: []
	I0412 20:12:51.647888  273955 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220412201228-42006/client.crt ...
	I0412 20:12:51.647918  273955 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220412201228-42006/client.crt: {Name:mk12b6b515262d3e8419425543868c7997951fa5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 20:12:51.648168  273955 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220412201228-42006/client.key ...
	I0412 20:12:51.648186  273955 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220412201228-42006/client.key: {Name:mk7843f908a458661234845511f4229967014c4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 20:12:51.648327  273955 certs.go:302] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220412201228-42006/apiserver.key.dd3b5fb2
	I0412 20:12:51.648352  273955 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220412201228-42006/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0412 20:12:51.958584  273955 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220412201228-42006/apiserver.crt.dd3b5fb2 ...
	I0412 20:12:51.958625  273955 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220412201228-42006/apiserver.crt.dd3b5fb2: {Name:mk31403330416b30e1e9136c0676bd5607171fb5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 20:12:51.958857  273955 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220412201228-42006/apiserver.key.dd3b5fb2 ...
	I0412 20:12:51.958868  273955 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220412201228-42006/apiserver.key.dd3b5fb2: {Name:mk90aa83a32737a7e9eae2d2173c078c42766e55 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 20:12:51.958973  273955 certs.go:320] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220412201228-42006/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220412201228-42006/apiserver.crt
	I0412 20:12:51.959039  273955 certs.go:324] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220412201228-42006/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220412201228-42006/apiserver.key
	I0412 20:12:51.959095  273955 certs.go:302] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220412201228-42006/proxy-client.key
	I0412 20:12:51.959110  273955 crypto.go:68] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220412201228-42006/proxy-client.crt with IP's: []
	I0412 20:12:52.119332  273955 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220412201228-42006/proxy-client.crt ...
	I0412 20:12:52.119367  273955 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220412201228-42006/proxy-client.crt: {Name:mk8b627a058fd8c6287a35792d17b31e6f9fed02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 20:12:52.119586  273955 crypto.go:164] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220412201228-42006/proxy-client.key ...
	I0412 20:12:52.119604  273955 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220412201228-42006/proxy-client.key: {Name:mkaa09fd6ce1944157bd9f428963ed34b2046f4d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
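	(Editor's note: each crypto.go pair above — "Writing cert" / "Writing key" — is a fresh RSA key plus an x509 certificate signed by the pre-existing minikube CA, with the listed addresses as IP SANs. A condensed sketch of that generation step; illustrative only, the real logic lives in minikube's certs/crypto packages:

	    package certsketch

	    import (
	        "crypto/rand"
	        "crypto/rsa"
	        "crypto/x509"
	        "crypto/x509/pkix"
	        "math/big"
	        "net"
	        "time"
	    )

	    // newSignedCert creates a key pair and a CA-signed certificate
	    // whose IP SANs match the "with IP's: [...]" list in the log.
	    func newSignedCert(ca *x509.Certificate, caKey *rsa.PrivateKey, ips []net.IP) ([]byte, *rsa.PrivateKey, error) {
	        key, err := rsa.GenerateKey(rand.Reader, 2048)
	        if err != nil {
	            return nil, nil, err
	        }
	        tmpl := &x509.Certificate{
	            SerialNumber: big.NewInt(1),
	            Subject:      pkix.Name{CommonName: "minikube"},
	            IPAddresses:  ips, // e.g. 192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1
	            NotBefore:    time.Now(),
	            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration in the cluster config below
	            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
	            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	        }
	        der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
	        return der, key, err
	    })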
	I0412 20:12:52.119800  273955 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/42006.pem (1338 bytes)
	W0412 20:12:52.119841  273955 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/42006_empty.pem, impossibly tiny 0 bytes
	I0412 20:12:52.119854  273955 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem (1679 bytes)
	I0412 20:12:52.119880  273955 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem (1082 bytes)
	I0412 20:12:52.119903  273955 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem (1123 bytes)
	I0412 20:12:52.119983  273955 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem (1675 bytes)
	I0412 20:12:52.120029  273955 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/420062.pem (1708 bytes)
	I0412 20:12:52.120587  273955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220412201228-42006/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0412 20:12:52.139881  273955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220412201228-42006/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0412 20:12:52.158016  273955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220412201228-42006/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0412 20:12:52.176799  273955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220412201228-42006/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0412 20:12:52.195861  273955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0412 20:12:52.214820  273955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0412 20:12:52.233423  273955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0412 20:12:52.251146  273955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0412 20:12:52.269210  273955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/420062.pem --> /usr/share/ca-certificates/420062.pem (1708 bytes)
	I0412 20:12:52.289345  273955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0412 20:12:52.309838  273955 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/42006.pem --> /usr/share/ca-certificates/42006.pem (1338 bytes)
	I0412 20:12:52.328713  273955 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0412 20:12:52.342440  273955 ssh_runner.go:195] Run: openssl version
	I0412 20:12:52.347831  273955 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/420062.pem && ln -fs /usr/share/ca-certificates/420062.pem /etc/ssl/certs/420062.pem"
	I0412 20:12:52.355996  273955 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/420062.pem
	I0412 20:12:52.359762  273955 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Apr 12 19:26 /usr/share/ca-certificates/420062.pem
	I0412 20:12:52.359824  273955 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/420062.pem
	I0412 20:12:52.366991  273955 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/420062.pem /etc/ssl/certs/3ec20f2e.0"
	I0412 20:12:52.375361  273955 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0412 20:12:52.383496  273955 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0412 20:12:52.386983  273955 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Apr 12 19:21 /usr/share/ca-certificates/minikubeCA.pem
	I0412 20:12:52.387054  273955 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0412 20:12:52.393142  273955 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0412 20:12:52.401716  273955 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/42006.pem && ln -fs /usr/share/ca-certificates/42006.pem /etc/ssl/certs/42006.pem"
	I0412 20:12:52.409615  273955 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42006.pem
	I0412 20:12:52.412997  273955 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Apr 12 19:26 /usr/share/ca-certificates/42006.pem
	I0412 20:12:52.413062  273955 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42006.pem
	I0412 20:12:52.418063  273955 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/42006.pem /etc/ssl/certs/51391683.0"
	I0412 20:12:52.436368  273955 kubeadm.go:391] StartCluster: {Name:default-k8s-different-port-20220412201228-42006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:default-k8s-different-port-20220412201228-42006 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.23.5 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0412 20:12:52.436503  273955 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0412 20:12:52.436547  273955 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0412 20:12:52.463962  273955 cri.go:87] found id: ""
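	(Editor's note: cri.go:52/87 above run the quoted crictl command and parse its output; the empty id list is expected on a brand-new node. The shape of that call in Go, a sketch mirroring the exact flags in the Run line:

	    package crilist

	    import (
	        "os/exec"
	        "strings"
	    )

	    // listKubeSystemContainers returns the container IDs crictl reports
	    // for pods in kube-system, as in "crictl ps -a --quiet" above.
	    func listKubeSystemContainers() ([]string, error) {
	        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
	            "--label", "io.kubernetes.pod.namespace=kube-system").Output()
	        if err != nil {
	            return nil, err
	        }
	        return strings.Fields(string(out)), nil
	    })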
	I0412 20:12:52.464135  273955 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0412 20:12:52.472543  273955 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0412 20:12:52.480950  273955 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0412 20:12:52.481032  273955 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0412 20:12:52.489705  273955 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0412 20:12:52.489752  273955 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0412 20:12:52.857037  273955 out.go:203]   - Generating certificates and keys ...
	I0412 20:12:55.532438  273955 out.go:203]   - Booting up control plane ...
	I0412 20:13:09.079967  273955 out.go:203]   - Configuring RBAC rules ...
	I0412 20:13:09.497355  273955 cni.go:93] Creating CNI manager for ""
	I0412 20:13:09.497388  273955 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0412 20:13:09.499389  273955 out.go:176] * Configuring CNI (Container Networking Interface) ...
	I0412 20:13:09.499466  273955 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0412 20:13:09.503511  273955 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.5/kubectl ...
	I0412 20:13:09.503540  273955 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0412 20:13:09.519365  273955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0412 20:13:10.285413  273955 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0412 20:13:10.285512  273955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl label nodes minikube.k8s.io/version=v1.25.2 minikube.k8s.io/commit=dcd548d63d1c0dcbdc0ffc0bd37d4379117c142f minikube.k8s.io/name=default-k8s-different-port-20220412201228-42006 minikube.k8s.io/updated_at=2022_04_12T20_13_10_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:13:10.285539  273955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:13:10.394374  273955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:13:10.394428  273955 ops.go:34] apiserver oom_adj: -16
	I0412 20:13:10.956912  273955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:13:11.457316  273955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:13:11.957310  273955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:13:12.456883  273955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:13:12.956796  273955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:13:13.456558  273955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:13:13.957284  273955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:13:14.456981  273955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:13:14.956344  273955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:13:15.456277  273955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:13:15.956926  273955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:13:16.457160  273955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:13:16.957053  273955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:13:17.457251  273955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:13:17.956338  273955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:13:18.457214  273955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:13:18.957304  273955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:13:19.457327  273955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:13:19.956855  273955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:13:20.457320  273955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:13:20.957318  273955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:13:21.457310  273955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:13:21.957323  273955 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:13:22.022980  273955 kubeadm.go:1020] duration metric: took 11.737535676s to wait for elevateKubeSystemPrivileges.
	I0412 20:13:22.023010  273955 kubeadm.go:393] StartCluster complete in 29.58665943s
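	(Editor's note: the burst of "kubectl get sa default" calls above — one every 500ms from 20:13:10 to 20:13:22 — is a poll: kubeadm init has finished, and minikube waits for the default ServiceAccount to exist before the cluster-admin binding can take effect. The loop, sketched in Go with the paths shown in the log; the helper name and timeout are illustrative:

	    package sapoll

	    import (
	        "fmt"
	        "os/exec"
	        "time"
	    )

	    // waitForDefaultSA polls "kubectl get sa default" until it
	    // succeeds, at the half-second cadence visible in the log.
	    func waitForDefaultSA(timeout time.Duration) error {
	        deadline := time.Now().Add(timeout)
	        for time.Now().Before(deadline) {
	            cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.23.5/kubectl",
	                "get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig")
	            if cmd.Run() == nil {
	                return nil
	            }
	            time.Sleep(500 * time.Millisecond)
	        }
	        return fmt.Errorf("default ServiceAccount not ready after %s", timeout)
	    }

	Here the poll succeeds after roughly 11.7s, per the duration metric above.)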
	I0412 20:13:22.023028  273955 settings.go:142] acquiring lock: {Name:mkaf0259d09993f7f0249c32b54fea561e21f88c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 20:13:22.023136  273955 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0412 20:13:22.024837  273955 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig: {Name:mk47182dadcc139652898f38b199a7292a3b4031 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 20:13:22.541847  273955 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20220412201228-42006" rescaled to 1
	I0412 20:13:22.541929  273955 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.23.5 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0412 20:13:22.543958  273955 out.go:176] * Verifying Kubernetes components...
	I0412 20:13:22.544030  273955 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0412 20:13:22.541976  273955 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0412 20:13:22.541997  273955 addons.go:415] enableAddons start: toEnable=map[], additional=[]
	I0412 20:13:22.542194  273955 config.go:178] Loaded profile config "default-k8s-different-port-20220412201228-42006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.5
	I0412 20:13:22.544173  273955 addons.go:65] Setting storage-provisioner=true in profile "default-k8s-different-port-20220412201228-42006"
	I0412 20:13:22.544194  273955 addons.go:153] Setting addon storage-provisioner=true in "default-k8s-different-port-20220412201228-42006"
	W0412 20:13:22.544201  273955 addons.go:165] addon storage-provisioner should already be in state true
	I0412 20:13:22.544222  273955 addons.go:65] Setting default-storageclass=true in profile "default-k8s-different-port-20220412201228-42006"
	I0412 20:13:22.544247  273955 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20220412201228-42006"
	I0412 20:13:22.544249  273955 host.go:66] Checking if "default-k8s-different-port-20220412201228-42006" exists ...
	I0412 20:13:22.544646  273955 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220412201228-42006 --format={{.State.Status}}
	I0412 20:13:22.545153  273955 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220412201228-42006 --format={{.State.Status}}
	I0412 20:13:22.596551  273955 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0412 20:13:22.596721  273955 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0412 20:13:22.596740  273955 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0412 20:13:22.596804  273955 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220412201228-42006
	I0412 20:13:22.599680  273955 addons.go:153] Setting addon default-storageclass=true in "default-k8s-different-port-20220412201228-42006"
	W0412 20:13:22.599716  273955 addons.go:165] addon default-storageclass should already be in state true
	I0412 20:13:22.599753  273955 host.go:66] Checking if "default-k8s-different-port-20220412201228-42006" exists ...
	I0412 20:13:22.600271  273955 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220412201228-42006 --format={{.State.Status}}
	I0412 20:13:22.618769  273955 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0412 20:13:22.620682  273955 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20220412201228-42006" to be "Ready" ...
	I0412 20:13:22.645824  273955 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49412 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220412201228-42006/id_rsa Username:docker}
	I0412 20:13:22.647680  273955 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0412 20:13:22.647700  273955 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0412 20:13:22.647743  273955 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220412201228-42006
	I0412 20:13:22.685416  273955 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49412 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220412201228-42006/id_rsa Username:docker}
	I0412 20:13:22.899934  273955 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0412 20:13:22.901415  273955 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0412 20:13:23.115148  273955 start.go:777] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
	I0412 20:13:23.320852  273955 out.go:176] * Enabled addons: storage-provisioner, default-storageclass
	I0412 20:13:23.320880  273955 addons.go:417] enableAddons completed in 778.890046ms
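	(Editor's note: the sed pipeline run at 20:13:22.618769 splices this stanza into the CoreDNS Corefile, just ahead of its forward plugin, which is what the "host record injected into CoreDNS" line confirms:

	    hosts {
	       192.168.49.1 host.minikube.internal
	       fallthrough
	    }

	With that in place, pods can resolve host.minikube.internal to the host's address on the cluster network.)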
	I0412 20:13:24.626663  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:13:26.626921  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:13:28.627069  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:13:30.627314  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:13:33.127458  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:13:35.127676  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:13:37.128242  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:13:39.627476  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:13:41.628056  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:13:44.127243  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:13:46.127979  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:13:48.627578  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:13:50.627689  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:13:52.627786  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:13:55.127812  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:13:57.128027  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:13:59.627274  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:14:01.627472  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:14:04.127534  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:14:06.627442  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:14:09.128201  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:14:11.627831  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:14:14.127986  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:14:16.627569  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:14:19.127780  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:14:21.627899  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:14:24.127325  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:14:26.127416  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:14:28.127721  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:14:30.627141  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:14:32.627877  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:14:35.128008  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:14:37.628055  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:14:39.628134  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:14:41.628747  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:14:44.127896  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:14:46.627912  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:14:49.127578  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:14:51.627785  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:14:54.127667  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:14:56.627555  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:14:58.627673  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:15:01.127467  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:15:03.127958  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:15:05.627336  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:15:08.127482  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:15:10.128205  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:15:12.627006  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:15:14.627346  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:15:16.627715  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:15:19.127750  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:15:21.628033  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:15:24.127487  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:15:26.127773  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:15:28.627700  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:15:30.627863  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:15:32.627913  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:15:35.127918  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:15:37.627523  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:15:40.127924  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:15:42.627025  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:15:44.627571  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:15:46.628015  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:15:49.127289  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:15:51.627337  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:15:53.627707  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:15:56.127293  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:15:58.127903  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:16:00.128429  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:16:02.129651  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:16:04.627411  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:16:07.127206  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:16:09.128308  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:16:11.627780  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:16:14.127483  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:16:16.627781  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:16:19.127539  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:16:21.627671  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:16:24.127732  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:16:26.627810  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:16:29.126973  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:16:31.128232  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:16:33.626978  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:16:35.627709  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:16:38.127682  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:16:40.627714  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:16:43.127935  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:16:45.627570  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:16:47.627702  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:16:50.127764  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:16:52.627288  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:16:55.127319  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:16:57.128161  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:16:59.627554  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:17:02.128657  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:17:04.627577  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:17:07.127689  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:17:09.627222  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:17:12.127950  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:17:14.627403  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:17:17.127577  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:17:19.128140  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:17:21.627231  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:17:22.630558  273955 node_ready.go:38] duration metric: took 4m0.009835916s waiting for node "default-k8s-different-port-20220412201228-42006" to be "Ready" ...
	I0412 20:17:22.633438  273955 out.go:176] 
	W0412 20:17:22.633564  273955 out.go:241] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0412 20:17:22.633578  273955 out.go:241] * 
	W0412 20:17:22.634288  273955 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0412 20:17:22.636885  273955 out.go:176] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:172: failed starting minikube (first start): args "out/minikube-linux-amd64 start -p default-k8s-different-port-20220412201228-42006 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.5": exit status 80
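[editor's note] The node_ready.go polling above is a plain get-node/check-Ready loop on a roughly 2.5s tick until the 6m0s wait expires. For reference, a minimal client-go sketch of that kind of check follows; the package and function names (nodewait, WaitNodeReady) and the tick interval are illustrative assumptions, not minikube's actual node_ready.go.

	// Hedged sketch of the readiness loop seen in the node_ready.go lines
	// above: fetch the node, check its NodeReady condition, retry until a
	// deadline. Package/function names and the 2.5s tick are assumptions.
	package nodewait

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// WaitNodeReady blocks until node `name` reports Ready=True or the
	// timeout (6m0s in the failed run above) elapses.
	func WaitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
				fmt.Printf("node %q has status \"Ready\":\"False\"\n", name)
			}
			time.Sleep(2500 * time.Millisecond) // matches the ~2.5s cadence in the log
		}
		return fmt.Errorf("timed out waiting for node %q to be ready", name)
	}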
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220412201228-42006
helpers_test.go:235: (dbg) docker inspect default-k8s-different-port-20220412201228-42006:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6642b489f96391820ba70b96c7534c3a76d670c12f14b131c414488b6433932f",
	        "Created": "2022-04-12T20:12:37.404174744Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 274647,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-04-12T20:12:37.803691082Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:44d43b69f3d5ba7f801dca891b535f23f9839671e82277938ec7dc42a22c50d6",
	        "ResolvConfPath": "/var/lib/docker/containers/6642b489f96391820ba70b96c7534c3a76d670c12f14b131c414488b6433932f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6642b489f96391820ba70b96c7534c3a76d670c12f14b131c414488b6433932f/hostname",
	        "HostsPath": "/var/lib/docker/containers/6642b489f96391820ba70b96c7534c3a76d670c12f14b131c414488b6433932f/hosts",
	        "LogPath": "/var/lib/docker/containers/6642b489f96391820ba70b96c7534c3a76d670c12f14b131c414488b6433932f/6642b489f96391820ba70b96c7534c3a76d670c12f14b131c414488b6433932f-json.log",
	        "Name": "/default-k8s-different-port-20220412201228-42006",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-different-port-20220412201228-42006:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-different-port-20220412201228-42006",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/6c20441da854c76109edadd5c14467eeab1a532a78b987301c8ccc63f013fdb5-init/diff:/var/lib/docker/overlay2/a46d95d024de4bf9705eb193a92586bdab1878cd991975232b71b00099a9dcbd/diff:/var/lib/docker/overlay2/ea82ee4a684697cc3575193cd81b57372b927c9bf8e744fce634f9abd0ce56f9/diff:/var/lib/docker/overlay2/78746ad8dd0d6497f442bd186c99cfd280a7ed0ff07c9d33d217c0f00c8c4565/diff:/var/lib/docker/overlay2/a402f380eceb56655ea5f1e6ca4a61a01ae014a5df04f1a7d02d8f57ff3e6c84/diff:/var/lib/docker/overlay2/b27a231791a4d14a662f9e6e34fdd213411e56cc17149199657aa480018b3c72/diff:/var/lib/docker/overlay2/0a44e7fc2c8d5589d496b9d0585d39e8e142f48342ff9669a35c370bd0298e42/diff:/var/lib/docker/overlay2/6ca98e52ca7d4cc60d14bd2db9969dd3356e0e0ce3acd5bfb5734e6e59f52c7e/diff:/var/lib/docker/overlay2/9957a7c00c30c9d801326093ddf20994a7ee1daaa54bc4dac5c2dd6d8711bd7e/diff:/var/lib/docker/overlay2/f7a1aafecf6ee716c484b5eecbbf236a53607c253fe283c289707fad85495a88/diff:/var/lib/docker/overlay2/fe8cd1
26522650fedfc827751e0b74da9a882ff48de51bc9dee6428ee3bc1122/diff:/var/lib/docker/overlay2/5b4cc7e4a78288063ad39231ca158608aa28e9dec6015d4e186e4c4d6888017f/diff:/var/lib/docker/overlay2/2a754ceb6abee0f92c99667fae50c7899233e94595630e9caffbf73cda1ff741/diff:/var/lib/docker/overlay2/9e69139d9b2bc63ab678378e004018ece394ec37e8289ba5eb30901dda160da5/diff:/var/lib/docker/overlay2/3db8e6413b3a1f309b81d2e1a79c3d239c4e4568b31a6f4bf92511f477f3a61d/diff:/var/lib/docker/overlay2/5ab54e45d09e2d6da4f4228ebae3075b5974e1d847526c1011fc7368392ef0d2/diff:/var/lib/docker/overlay2/6daf6a3cf916347bbbb70ace4aab29dd0f272dc9e39d6b0bf14940470857f1d5/diff:/var/lib/docker/overlay2/b85d29df9ed74e769c82a956eb46ca4eaf51018e94270fee2f58a6f2d82c354c/diff:/var/lib/docker/overlay2/0804b9c30e0dcc68e15139106e47bca1969b010d520652c87ff1476f5da9b799/diff:/var/lib/docker/overlay2/2ef50ba91c77826aae2efca8daf7194c2d56fd8e745476a35413585cdab580a6/diff:/var/lib/docker/overlay2/6f5a272367c30d47254dedc8a42e6b2791c406c3b74fd6a8242d568e4ec362e3/diff:/var/lib/d
ocker/overlay2/e978bd5ca7463862ca1b51d0bf19f95d916464dc866f09f1ab4a5ae4c082c3a9/diff:/var/lib/docker/overlay2/0d60a5805e276ca3bff4824250eab1d2960e9d10d28282e07652204c07dc107f/diff:/var/lib/docker/overlay2/d00efa0bc999057fcf3efdeed81022cc8b9b9871919f11d7d9199a3d22fda41b/diff:/var/lib/docker/overlay2/44d3db5bf7925c4cc8ee60008ff23d799e12ea6586850d797b930fa796788861/diff:/var/lib/docker/overlay2/4af15c525b7ce96b7fd4117c156f53cf9099702641c2907909c12b7019563d44/diff:/var/lib/docker/overlay2/ae9ca4b8da4afb1303158a42ec2ac83dc057c0eaefcd69b7eeaa094ae24a39e7/diff:/var/lib/docker/overlay2/afb8ebd776ddcba17d1056f2350cd0b303c6664964644896a92e9c07252b5d95/diff:/var/lib/docker/overlay2/41b6235378ad54ccaec907f16811e7cd66bd777db63151293f4d8247a33af8f1/diff:/var/lib/docker/overlay2/e079465076581cb577a9d5c7d676cecb6495ddd73d9fc330e734203dd7e48607/diff:/var/lib/docker/overlay2/2d3a7c3e62a99d54d94c2562e13b904453442bda8208afe73cdbe1afdbdd0684/diff:/var/lib/docker/overlay2/b9e03b9cbc1c5a9bbdbb0c99ca5d7539c2fa81a37872c40e07377b52f19
50f4b/diff:/var/lib/docker/overlay2/fd0b72378869edec809e7ead1e4448ae67c73245e0e98d751c51253c80f12d56/diff:/var/lib/docker/overlay2/a34f5625ad35eb2eb1058204a5c23590d70d9aae62a3a0cf05f87501c388ccde/diff:/var/lib/docker/overlay2/6221ad5f4d7b133c35d96ab112cf2eb437196475a72ea0ec8952c058c6644381/diff:/var/lib/docker/overlay2/b33a322162ab62a47e5e731b35da4a989d8a79fcb67e1925b109eace6772370c/diff:/var/lib/docker/overlay2/b52fc81aca49f276f1c709fa139521063628f4042b9da5969a3487a57ee3226b/diff:/var/lib/docker/overlay2/5b4d11a181cad1ea657c7ea99d422b51c942ece21b8d24442b4e8806644e0e1c/diff:/var/lib/docker/overlay2/1620ce1d42f02f38d07f3ff0970e3df6940a3be20f3c7cd835f4f40f5cc2d010/diff:/var/lib/docker/overlay2/43f18c528700dc241024bb24f43a0d5192ecc9575f4b053582410f6265326434/diff:/var/lib/docker/overlay2/e59874999e485483e50da428a499e40c91890c33515857454d7a64bc04ca0c43/diff:/var/lib/docker/overlay2/a120ff1bbaa325cd87d2682d6751d3bf287b66d4bbe31bd1f9f6283d724491ac/diff:/var/lib/docker/overlay2/a6a6f3646fabc023283ff6349b9627be8332c4
bb740688f8fda12c98bd76b725/diff:/var/lib/docker/overlay2/3c2b110c4b3a8689b2792b2b73f99f06bd9858b494c2164e812208579b0223f2/diff:/var/lib/docker/overlay2/98e3881e2e4128283f8d66fafc082bc795e22eab77f135635d3249367b92ba5c/diff:/var/lib/docker/overlay2/ce937670cf64eff618c699bfd15e46c6d70c0184fef594182e5ec6df83b265bc/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6c20441da854c76109edadd5c14467eeab1a532a78b987301c8ccc63f013fdb5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6c20441da854c76109edadd5c14467eeab1a532a78b987301c8ccc63f013fdb5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6c20441da854c76109edadd5c14467eeab1a532a78b987301c8ccc63f013fdb5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-different-port-20220412201228-42006",
	                "Source": "/var/lib/docker/volumes/default-k8s-different-port-20220412201228-42006/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-different-port-20220412201228-42006",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-different-port-20220412201228-42006",
	                "name.minikube.sigs.k8s.io": "default-k8s-different-port-20220412201228-42006",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "fdc8d9902df162e7f584a615cf1a67a1ddf8a0e7aa58b4c4180e9bac803f9952",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49412"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49411"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49408"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49410"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49409"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/fdc8d9902df1",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-different-port-20220412201228-42006": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "6642b489f963",
	                        "default-k8s-different-port-20220412201228-42006"
	                    ],
	                    "NetworkID": "e1e5eb80641804e0cf03f9ee1959284f2ec05fd6c94f6b6eb19931fc6032414c",
	                    "EndpointID": "dc02bef0f4abc1393769df835a0a013dde3e78db69d9fafacbeb8f560aaccea3",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
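[editor's note] Rather than eyeballing the full JSON dump above, individual fields can be pulled with docker inspect's Go-template --format flag, which is exactly how the cli_runner.go lines later in this log query container state and the forwarded SSH port. A small standalone sketch under that assumption (this helper is illustrative, not minikube's cli_runner):

	// Hedged sketch: query single fields from `docker inspect` output via
	// Go templates, mirroring the cli_runner.go invocations in this log.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		name := "default-k8s-different-port-20220412201228-42006"

		// Equivalent to: docker container inspect <name> --format={{.State.Status}}
		out, err := exec.Command("docker", "container", "inspect", name,
			"--format", "{{.State.Status}}").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Println("state:", strings.TrimSpace(string(out))) // e.g. "running"

		// The forwarded SSH port, matching the Ports map in the inspect
		// output above (22/tcp -> 127.0.0.1:49412).
		out, err = exec.Command("docker", "container", "inspect", name,
			"--format", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`).Output()
		if err == nil {
			fmt.Println("ssh port:", strings.TrimSpace(string(out)))
		}
	}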
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220412201228-42006 -n default-k8s-different-port-20220412201228-42006

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/FirstStart
helpers_test.go:244: <<< TestStartStop/group/default-k8s-different-port/serial/FirstStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/FirstStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-different-port-20220412201228-42006 logs -n 25
helpers_test.go:252: TestStartStop/group/default-k8s-different-port/serial/FirstStart logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|--------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                            Args                            |                  Profile                   |  User   | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------------------------|--------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| ssh     | -p bridge-20220412195202-42006                             | bridge-20220412195202-42006                | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:07:57 UTC | Tue, 12 Apr 2022 20:07:58 UTC |
	|         | pgrep -a kubelet                                           |                                            |         |         |                               |                               |
	| -p      | old-k8s-version-20220412200421-42006                       | old-k8s-version-20220412200421-42006       | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:09:15 UTC | Tue, 12 Apr 2022 20:09:16 UTC |
	|         | logs -n 25                                                 |                                            |         |         |                               |                               |
	| -p      | embed-certs-20220412200510-42006                           | embed-certs-20220412200510-42006           | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:10:08 UTC | Tue, 12 Apr 2022 20:10:09 UTC |
	|         | logs -n 25                                                 |                                            |         |         |                               |                               |
	| start   | -p                                                         | no-preload-20220412200453-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:06:38 UTC | Tue, 12 Apr 2022 20:12:02 UTC |
	|         | no-preload-20220412200453-42006                            |                                            |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                            |         |         |                               |                               |
	|         | --wait=true --preload=false                                |                                            |         |         |                               |                               |
	|         | --driver=docker                                            |                                            |         |         |                               |                               |
	|         | --container-runtime=containerd                             |                                            |         |         |                               |                               |
	|         | --kubernetes-version=v1.23.6-rc.0                          |                                            |         |         |                               |                               |
	| ssh     | -p                                                         | no-preload-20220412200453-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:12:20 UTC | Tue, 12 Apr 2022 20:12:20 UTC |
	|         | no-preload-20220412200453-42006                            |                                            |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                            |         |         |                               |                               |
	| pause   | -p                                                         | no-preload-20220412200453-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:12:20 UTC | Tue, 12 Apr 2022 20:12:21 UTC |
	|         | no-preload-20220412200453-42006                            |                                            |         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                            |         |         |                               |                               |
	| unpause | -p                                                         | no-preload-20220412200453-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:12:22 UTC | Tue, 12 Apr 2022 20:12:23 UTC |
	|         | no-preload-20220412200453-42006                            |                                            |         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                            |         |         |                               |                               |
	| delete  | -p                                                         | no-preload-20220412200453-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:12:24 UTC | Tue, 12 Apr 2022 20:12:27 UTC |
	|         | no-preload-20220412200453-42006                            |                                            |         |         |                               |                               |
	| delete  | -p                                                         | no-preload-20220412200453-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:12:27 UTC | Tue, 12 Apr 2022 20:12:27 UTC |
	|         | no-preload-20220412200453-42006                            |                                            |         |         |                               |                               |
	| delete  | -p                                                         | disable-driver-mounts-20220412201227-42006 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:12:27 UTC | Tue, 12 Apr 2022 20:12:28 UTC |
	|         | disable-driver-mounts-20220412201227-42006                 |                                            |         |         |                               |                               |
	| -p      | bridge-20220412195202-42006                                | bridge-20220412195202-42006                | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:12:49 UTC | Tue, 12 Apr 2022 20:12:50 UTC |
	|         | logs -n 25                                                 |                                            |         |         |                               |                               |
	| delete  | -p bridge-20220412195202-42006                             | bridge-20220412195202-42006                | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:12:50 UTC | Tue, 12 Apr 2022 20:12:53 UTC |
	| start   | -p newest-cni-20220412201253-42006 --memory=2200           | newest-cni-20220412201253-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:12:53 UTC | Tue, 12 Apr 2022 20:13:47 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                            |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                            |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                            |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                            |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=containerd            |                                            |         |         |                               |                               |
	|         | --kubernetes-version=v1.23.6-rc.0                          |                                            |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | newest-cni-20220412201253-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:13:47 UTC | Tue, 12 Apr 2022 20:13:48 UTC |
	|         | newest-cni-20220412201253-42006                            |                                            |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                            |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                            |         |         |                               |                               |
	| stop    | -p                                                         | newest-cni-20220412201253-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:13:48 UTC | Tue, 12 Apr 2022 20:14:08 UTC |
	|         | newest-cni-20220412201253-42006                            |                                            |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                            |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | newest-cni-20220412201253-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:14:08 UTC | Tue, 12 Apr 2022 20:14:08 UTC |
	|         | newest-cni-20220412201253-42006                            |                                            |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                            |         |         |                               |                               |
	| start   | -p newest-cni-20220412201253-42006 --memory=2200           | newest-cni-20220412201253-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:14:08 UTC | Tue, 12 Apr 2022 20:14:42 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                            |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                            |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                            |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                            |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=containerd            |                                            |         |         |                               |                               |
	|         | --kubernetes-version=v1.23.6-rc.0                          |                                            |         |         |                               |                               |
	| ssh     | -p                                                         | newest-cni-20220412201253-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:14:43 UTC | Tue, 12 Apr 2022 20:14:43 UTC |
	|         | newest-cni-20220412201253-42006                            |                                            |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                            |         |         |                               |                               |
	| pause   | -p                                                         | newest-cni-20220412201253-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:14:43 UTC | Tue, 12 Apr 2022 20:14:44 UTC |
	|         | newest-cni-20220412201253-42006                            |                                            |         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                            |         |         |                               |                               |
	| unpause | -p                                                         | newest-cni-20220412201253-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:14:45 UTC | Tue, 12 Apr 2022 20:14:45 UTC |
	|         | newest-cni-20220412201253-42006                            |                                            |         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                            |         |         |                               |                               |
	| delete  | -p                                                         | newest-cni-20220412201253-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:14:46 UTC | Tue, 12 Apr 2022 20:14:49 UTC |
	|         | newest-cni-20220412201253-42006                            |                                            |         |         |                               |                               |
	| delete  | -p                                                         | newest-cni-20220412201253-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:14:49 UTC | Tue, 12 Apr 2022 20:14:49 UTC |
	|         | newest-cni-20220412201253-42006                            |                                            |         |         |                               |                               |
	| -p      | old-k8s-version-20220412200421-42006                       | old-k8s-version-20220412200421-42006       | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:17:18 UTC | Tue, 12 Apr 2022 20:17:19 UTC |
	|         | logs -n 25                                                 |                                            |         |         |                               |                               |
	| -p      | old-k8s-version-20220412200421-42006                       | old-k8s-version-20220412200421-42006       | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:17:20 UTC | Tue, 12 Apr 2022 20:17:21 UTC |
	|         | logs -n 25                                                 |                                            |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | old-k8s-version-20220412200421-42006       | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:17:22 UTC | Tue, 12 Apr 2022 20:17:22 UTC |
	|         | old-k8s-version-20220412200421-42006                       |                                            |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                            |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                            |         |         |                               |                               |
	|---------|------------------------------------------------------------|--------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/04/12 20:14:08
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.18 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0412 20:14:08.832397  282203 out.go:297] Setting OutFile to fd 1 ...
	I0412 20:14:08.832526  282203 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0412 20:14:08.832537  282203 out.go:310] Setting ErrFile to fd 2...
	I0412 20:14:08.832541  282203 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0412 20:14:08.832644  282203 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/bin
	I0412 20:14:08.832908  282203 out.go:304] Setting JSON to false
	I0412 20:14:08.834493  282203 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":10602,"bootTime":1649783847,"procs":547,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1023-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0412 20:14:08.834611  282203 start.go:125] virtualization: kvm guest
	I0412 20:14:08.837207  282203 out.go:176] * [newest-cni-20220412201253-42006] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	I0412 20:14:08.838808  282203 out.go:176]   - MINIKUBE_LOCATION=13812
	I0412 20:14:08.837440  282203 notify.go:193] Checking for updates...
	I0412 20:14:08.840190  282203 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0412 20:14:08.841789  282203 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0412 20:14:08.843251  282203 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube
	I0412 20:14:08.844774  282203 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0412 20:14:08.845319  282203 config.go:178] Loaded profile config "newest-cni-20220412201253-42006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6-rc.0
	I0412 20:14:08.845793  282203 driver.go:346] Setting default libvirt URI to qemu:///system
	I0412 20:14:08.892101  282203 docker.go:137] docker version: linux-20.10.14
	I0412 20:14:08.892248  282203 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0412 20:14:08.993547  282203 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:75 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:49 SystemTime:2022-04-12 20:14:08.923798845 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1023-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662799872 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clie
ntInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0412 20:14:08.993679  282203 docker.go:254] overlay module found
	I0412 20:14:08.996175  282203 out.go:176] * Using the docker driver based on existing profile
	I0412 20:14:08.996210  282203 start.go:284] selected driver: docker
	I0412 20:14:08.996217  282203 start.go:801] validating driver "docker" against &{Name:newest-cni-20220412201253-42006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6-rc.0 ClusterName:newest-cni-20220412201253-42006 Namespace:default API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.23.6-rc.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[Met
ricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0412 20:14:08.996338  282203 start.go:812] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	W0412 20:14:08.996376  282203 oci.go:120] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0412 20:14:08.996397  282203 out.go:241] ! Your cgroup does not allow setting memory.
	I0412 20:14:08.998211  282203 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0412 20:14:08.998861  282203 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0412 20:14:09.094596  282203 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:75 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:49 SystemTime:2022-04-12 20:14:09.030624528 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1023-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662799872 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clie
ntInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	W0412 20:14:09.094806  282203 oci.go:120] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0412 20:14:09.094836  282203 out.go:241] ! Your cgroup does not allow setting memory.
	I0412 20:14:09.096887  282203 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0412 20:14:09.097012  282203 start_flags.go:866] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0412 20:14:09.097039  282203 cni.go:93] Creating CNI manager for ""
	I0412 20:14:09.097046  282203 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0412 20:14:09.097054  282203 cni.go:217] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0412 20:14:09.097062  282203 cni.go:222] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0412 20:14:09.097069  282203 start_flags.go:306] config:
	{Name:newest-cni-20220412201253-42006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6-rc.0 ClusterName:newest-cni-20220412201253-42006 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.23.6-rc.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false d
efault_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0412 20:14:09.099506  282203 out.go:176] * Starting control plane node newest-cni-20220412201253-42006 in cluster newest-cni-20220412201253-42006
	I0412 20:14:09.099556  282203 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0412 20:14:09.101249  282203 out.go:176] * Pulling base image ...
	I0412 20:14:09.101287  282203 preload.go:132] Checking if preload exists for k8s version v1.23.6-rc.0 and runtime containerd
	I0412 20:14:09.101322  282203 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-rc.0-containerd-overlay2-amd64.tar.lz4
	I0412 20:14:09.101342  282203 cache.go:57] Caching tarball of preloaded images
	I0412 20:14:09.101401  282203 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon
	I0412 20:14:09.101566  282203 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-rc.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0412 20:14:09.101582  282203 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.6-rc.0 on containerd
	I0412 20:14:09.101721  282203 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/newest-cni-20220412201253-42006/config.json ...
	I0412 20:14:09.147707  282203 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon, skipping pull
	I0412 20:14:09.147734  282203 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 exists in daemon, skipping load
	I0412 20:14:09.147748  282203 cache.go:206] Successfully downloaded all kic artifacts
	I0412 20:14:09.147784  282203 start.go:352] acquiring machines lock for newest-cni-20220412201253-42006: {Name:mk0dccf8a2654d003d8787479cf4abb87e60a916 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0412 20:14:09.147896  282203 start.go:356] acquired machines lock for "newest-cni-20220412201253-42006" in 84.854µs
	I0412 20:14:09.147923  282203 start.go:94] Skipping create...Using existing machine configuration
	I0412 20:14:09.147932  282203 fix.go:55] fixHost starting: 
	I0412 20:14:09.148209  282203 cli_runner.go:164] Run: docker container inspect newest-cni-20220412201253-42006 --format={{.State.Status}}
	I0412 20:14:09.182695  282203 fix.go:103] recreateIfNeeded on newest-cni-20220412201253-42006: state=Stopped err=<nil>
	W0412 20:14:09.182743  282203 fix.go:129] unexpected machine state, will restart: <nil>
	I0412 20:14:09.128201  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:14:11.627831  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:14:09.185311  282203 out.go:176] * Restarting existing docker container for "newest-cni-20220412201253-42006" ...
	I0412 20:14:09.185403  282203 cli_runner.go:164] Run: docker start newest-cni-20220412201253-42006
	I0412 20:14:09.582922  282203 cli_runner.go:164] Run: docker container inspect newest-cni-20220412201253-42006 --format={{.State.Status}}
	I0412 20:14:09.620698  282203 kic.go:416] container "newest-cni-20220412201253-42006" state is running.
	I0412 20:14:09.621213  282203 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220412201253-42006
	I0412 20:14:09.657122  282203 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/newest-cni-20220412201253-42006/config.json ...
	I0412 20:14:09.657367  282203 machine.go:88] provisioning docker machine ...
	I0412 20:14:09.657398  282203 ubuntu.go:169] provisioning hostname "newest-cni-20220412201253-42006"
	I0412 20:14:09.657457  282203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220412201253-42006
	I0412 20:14:09.694424  282203 main.go:134] libmachine: Using SSH client type: native
	I0412 20:14:09.694593  282203 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7d71c0] 0x7da220 <nil>  [] 0s} 127.0.0.1 49422 <nil> <nil>}
	I0412 20:14:09.694609  282203 main.go:134] libmachine: About to run SSH command:
	sudo hostname newest-cni-20220412201253-42006 && echo "newest-cni-20220412201253-42006" | sudo tee /etc/hostname
	I0412 20:14:09.695270  282203 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55074->127.0.0.1:49422: read: connection reset by peer
	I0412 20:14:12.826188  282203 main.go:134] libmachine: SSH cmd err, output: <nil>: newest-cni-20220412201253-42006
	
	I0412 20:14:12.826283  282203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220412201253-42006
	I0412 20:14:12.860717  282203 main.go:134] libmachine: Using SSH client type: native
	I0412 20:14:12.860887  282203 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7d71c0] 0x7da220 <nil>  [] 0s} 127.0.0.1 49422 <nil> <nil>}
	I0412 20:14:12.860908  282203 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-20220412201253-42006' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-20220412201253-42006/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-20220412201253-42006' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0412 20:14:12.984427  282203 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0412 20:14:12.984458  282203 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.mini
kube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube}
	I0412 20:14:12.984485  282203 ubuntu.go:177] setting up certificates
	I0412 20:14:12.984495  282203 provision.go:83] configureAuth start
	I0412 20:14:12.984546  282203 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220412201253-42006
	I0412 20:14:13.022286  282203 provision.go:138] copyHostCerts
	I0412 20:14:13.022359  282203 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem, removing ...
	I0412 20:14:13.022434  282203 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem
	I0412 20:14:13.022507  282203 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem (1082 bytes)
	I0412 20:14:13.022629  282203 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem, removing ...
	I0412 20:14:13.022645  282203 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem
	I0412 20:14:13.022670  282203 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem (1123 bytes)
	I0412 20:14:13.022733  282203 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem, removing ...
	I0412 20:14:13.022741  282203 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem
	I0412 20:14:13.022761  282203 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem (1675 bytes)
	I0412 20:14:13.022827  282203 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem org=jenkins.newest-cni-20220412201253-42006 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-20220412201253-42006]
	I0412 20:14:13.147393  282203 provision.go:172] copyRemoteCerts
	I0412 20:14:13.147461  282203 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0412 20:14:13.147499  282203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220412201253-42006
	I0412 20:14:13.182738  282203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49422 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/newest-cni-20220412201253-42006/id_rsa Username:docker}
	I0412 20:14:13.271719  282203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0412 20:14:13.291955  282203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0412 20:14:13.311640  282203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0412 20:14:13.330587  282203 provision.go:86] duration metric: configureAuth took 346.079902ms
	I0412 20:14:13.330615  282203 ubuntu.go:193] setting minikube options for container-runtime
	I0412 20:14:13.330805  282203 config.go:178] Loaded profile config "newest-cni-20220412201253-42006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6-rc.0
	I0412 20:14:13.330817  282203 machine.go:91] provisioned docker machine in 3.673434359s
	I0412 20:14:13.330823  282203 start.go:306] post-start starting for "newest-cni-20220412201253-42006" (driver="docker")
	I0412 20:14:13.330829  282203 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0412 20:14:13.330883  282203 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0412 20:14:13.330918  282203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220412201253-42006
	I0412 20:14:13.365737  282203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49422 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/newest-cni-20220412201253-42006/id_rsa Username:docker}
	I0412 20:14:13.460195  282203 ssh_runner.go:195] Run: cat /etc/os-release
	I0412 20:14:13.463475  282203 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0412 20:14:13.463524  282203 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0412 20:14:13.463538  282203 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0412 20:14:13.463544  282203 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0412 20:14:13.463556  282203 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/addons for local assets ...
	I0412 20:14:13.463617  282203 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files for local assets ...
	I0412 20:14:13.463682  282203 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/420062.pem -> 420062.pem in /etc/ssl/certs
	I0412 20:14:13.463765  282203 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0412 20:14:13.471624  282203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/420062.pem --> /etc/ssl/certs/420062.pem (1708 bytes)
	I0412 20:14:13.491654  282203 start.go:309] post-start completed in 160.815375ms
	I0412 20:14:13.491734  282203 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0412 20:14:13.491791  282203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220412201253-42006
	I0412 20:14:13.529484  282203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49422 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/newest-cni-20220412201253-42006/id_rsa Username:docker}
	I0412 20:14:13.616940  282203 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0412 20:14:13.621059  282203 fix.go:57] fixHost completed within 4.473117291s
	I0412 20:14:13.621091  282203 start.go:81] releasing machines lock for "newest-cni-20220412201253-42006", held for 4.473181182s
	I0412 20:14:13.621178  282203 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-20220412201253-42006
	I0412 20:14:13.655978  282203 ssh_runner.go:195] Run: systemctl --version
	I0412 20:14:13.656014  282203 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0412 20:14:13.656038  282203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220412201253-42006
	I0412 20:14:13.656108  282203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220412201253-42006
	I0412 20:14:13.692203  282203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49422 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/newest-cni-20220412201253-42006/id_rsa Username:docker}
	I0412 20:14:13.693258  282203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49422 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/newest-cni-20220412201253-42006/id_rsa Username:docker}
	I0412 20:14:13.795984  282203 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0412 20:14:13.808689  282203 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0412 20:14:13.820011  282203 docker.go:183] disabling docker service ...
	I0412 20:14:13.820092  282203 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0412 20:14:13.830551  282203 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0412 20:14:14.127986  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:14:16.627569  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:14:13.840509  282203 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0412 20:14:13.920197  282203 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0412 20:14:13.996299  282203 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0412 20:14:14.006773  282203 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0412 20:14:14.020629  282203 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLm1vbml0b3IudjEuY2dyb3VwcyJdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNiIKICAgIHN0YXRzX2NvbGxlY3RfcGVyaW9kID0gMTAKICAgIGVuYWJsZV90bHNfc3RyZWFtaW5nID0gZmFsc2UKICAgIG1heF9jb250YWluZXJfbG9nX2xpbmVfc2l6ZSA9IDE2Mzg0CiAgICByZXN0cmljdF9vb21fc2NvcmVfYWRqID0gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZF0KICAgICAgZGlzY2FyZF91bnBhY2tlZF9sYXllcnMgPSB0cnVlCiAgICAgIHNuYXBzaG90dGVyID0gIm92ZXJsYXlmcyIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQuZGVmYXVsdF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnVudHJ1c3RlZF93b3JrbG9hZF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICIiCiAgICAgICAgcnVudGltZV9lbmdpbmUgPSAiIgogICAgICAgIHJ1bnRpbWVfcm9vdCA9ICIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzLnJ1bmNdCiAgICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICBTeXN0ZW1kQ2dyb3VwID0gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY25pXQogICAgICBiaW5fZGlyID0gIi9vcHQvY25pL2JpbiIKICAgICAgY29uZl9kaXIgPSAiL2V0Yy9jbmkvbmV0Lm1rIgogICAgICBjb25mX3RlbXBsYXRlID0gIiIKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5yZWdpc3RyeV0KICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5Lm1pcnJvcnNdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5Lm1pcnJvcnMuImRvY2tlci5pbyJdCiAgICAgICAgICBlbmRwb2ludCA9IFsiaHR0cHM6Ly9yZWdpc3RyeS0xLmRvY2tlci5pbyJdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuc2VydmljZS52MS5kaWZmLXNlcnZpY2UiXQogICAgZGVmYXVsdCA9IFsid2Fsa2luZyJdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ2MudjEuc2NoZWR1bGVyIl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
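Both writes above wire up the CRI path: /etc/crictl.yaml points the crictl client at containerd's socket, and the base64 payload decodes to the TOML that lands in /etc/containerd/config.toml (base64 keeps the multi-line config intact inside the quoted shell command). A sketch for inspecting both, assuming the same node container:

	# Equivalent explicit crictl invocation if /etc/crictl.yaml were absent
	# (standard crictl flag; sketch only).
	sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock info
	# Decode the payload above, or read the file after the write lands.
	echo '<base64 payload from the log>' | base64 -d | head -n 40
	docker exec newest-cni-20220412201253-42006 cat /etc/containerd/config.toml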
	I0412 20:14:14.035412  282203 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0412 20:14:14.042432  282203 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0412 20:14:14.049388  282203 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0412 20:14:14.128037  282203 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0412 20:14:14.201778  282203 start.go:441] Will wait 60s for socket path /run/containerd/containerd.sock
	I0412 20:14:14.201900  282203 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0412 20:14:14.206190  282203 start.go:462] Will wait 60s for crictl version
	I0412 20:14:14.206249  282203 ssh_runner.go:195] Run: sudo crictl version
	I0412 20:14:14.233021  282203 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-04-12T20:14:14Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
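The fatal "server is not initialized yet" is expected for a moment right after the containerd restart above; minikube simply schedules a retry (11s here). A sketch of the equivalent manual poll:

	# Wait for the CRI server to answer again after restarting containerd
	# (sketch of the retry behavior logged above).
	until sudo crictl version >/dev/null 2>&1; do sleep 1; done
	sudo crictl version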
	I0412 20:14:19.127780  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:14:21.627899  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:14:25.280259  282203 ssh_runner.go:195] Run: sudo crictl version
	I0412 20:14:25.305913  282203 start.go:471] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.5.10
	RuntimeApiVersion:  v1alpha2
	I0412 20:14:25.305972  282203 ssh_runner.go:195] Run: containerd --version
	I0412 20:14:25.329153  282203 ssh_runner.go:195] Run: containerd --version
	I0412 20:14:25.353837  282203 out.go:176] * Preparing Kubernetes v1.23.6-rc.0 on containerd 1.5.10 ...
	I0412 20:14:25.353941  282203 cli_runner.go:164] Run: docker network inspect newest-cni-20220412201253-42006 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0412 20:14:25.390025  282203 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0412 20:14:25.393752  282203 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0412 20:14:25.406736  282203 out.go:176]   - kubelet.network-plugin=cni
	I0412 20:14:25.408682  282203 out.go:176]   - kubeadm.pod-network-cidr=192.168.111.111/16
	I0412 20:14:24.127325  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:14:26.127416  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:14:28.127721  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:14:25.410319  282203 out.go:176]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0412 20:14:25.410383  282203 preload.go:132] Checking if preload exists for k8s version v1.23.6-rc.0 and runtime containerd
	I0412 20:14:25.410438  282203 ssh_runner.go:195] Run: sudo crictl images --output json
	I0412 20:14:25.435000  282203 containerd.go:607] all images are preloaded for containerd runtime.
	I0412 20:14:25.435025  282203 containerd.go:521] Images already preloaded, skipping extraction
	I0412 20:14:25.435069  282203 ssh_runner.go:195] Run: sudo crictl images --output json
	I0412 20:14:25.460785  282203 containerd.go:607] all images are preloaded for containerd runtime.
	I0412 20:14:25.460815  282203 cache_images.go:84] Images are preloaded, skipping loading
	I0412 20:14:25.460865  282203 ssh_runner.go:195] Run: sudo crictl info
	I0412 20:14:25.486553  282203 cni.go:93] Creating CNI manager for ""
	I0412 20:14:25.486581  282203 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0412 20:14:25.486596  282203 kubeadm.go:87] Using pod CIDR: 192.168.111.111/16
	I0412 20:14:25.486612  282203 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:192.168.111.111/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.23.6-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-20220412201253-42006 NodeName:newest-cni-20220412201253-42006 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0412 20:14:25.486771  282203 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "newest-cni-20220412201253-42006"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.6-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "192.168.111.111/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "192.168.111.111/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
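The rendered config above is written to /var/tmp/minikube/kubeadm.yaml a few lines below. A hedged sketch for sanity-checking it with the same kubeadm binary before any phase runs (preflight only runs checks; the binaries path is taken from this run):

	# Run kubeadm preflight checks against the rendered config (sketch).
	sudo env PATH="/var/lib/minikube/binaries/v1.23.6-rc.0:$PATH" \
	  kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml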
	
	I0412 20:14:25.486858  282203 kubeadm.go:936] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.6-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-20220412201253-42006 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.76.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.6-rc.0 ClusterName:newest-cni-20220412201253-42006 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
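The unit text above becomes the systemd drop-in scp'd to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf just below, with the second ExecStart= overriding the base unit's. A sketch for confirming the override took effect:

	# Show the effective kubelet unit including the drop-in (sketch).
	sudo systemctl daemon-reload
	systemctl cat kubelet | grep -- --container-runtime-endpoint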
	I0412 20:14:25.486911  282203 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.6-rc.0
	I0412 20:14:25.495243  282203 binaries.go:44] Found k8s binaries, skipping transfer
	I0412 20:14:25.495328  282203 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0412 20:14:25.502983  282203 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (618 bytes)
	I0412 20:14:25.516969  282203 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0412 20:14:25.530231  282203 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2201 bytes)
	I0412 20:14:25.544174  282203 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0412 20:14:25.547463  282203 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0412 20:14:25.557235  282203 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/newest-cni-20220412201253-42006 for IP: 192.168.76.2
	I0412 20:14:25.557346  282203 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.key
	I0412 20:14:25.557383  282203 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.key
	I0412 20:14:25.557447  282203 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/newest-cni-20220412201253-42006/client.key
	I0412 20:14:25.557553  282203 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/newest-cni-20220412201253-42006/apiserver.key.31bdca25
	I0412 20:14:25.557606  282203 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/newest-cni-20220412201253-42006/proxy-client.key
	I0412 20:14:25.557698  282203 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/42006.pem (1338 bytes)
	W0412 20:14:25.557730  282203 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/42006_empty.pem, impossibly tiny 0 bytes
	I0412 20:14:25.557745  282203 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem (1679 bytes)
	I0412 20:14:25.557768  282203 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem (1082 bytes)
	I0412 20:14:25.557791  282203 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem (1123 bytes)
	I0412 20:14:25.557819  282203 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem (1675 bytes)
	I0412 20:14:25.557861  282203 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/420062.pem (1708 bytes)
	I0412 20:14:25.558574  282203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/newest-cni-20220412201253-42006/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0412 20:14:25.577575  282203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/newest-cni-20220412201253-42006/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0412 20:14:25.597461  282203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/newest-cni-20220412201253-42006/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0412 20:14:25.617831  282203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/newest-cni-20220412201253-42006/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0412 20:14:25.637035  282203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0412 20:14:25.655577  282203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0412 20:14:25.673593  282203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0412 20:14:25.693796  282203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0412 20:14:25.713653  282203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/42006.pem --> /usr/share/ca-certificates/42006.pem (1338 bytes)
	I0412 20:14:25.732646  282203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/420062.pem --> /usr/share/ca-certificates/420062.pem (1708 bytes)
	I0412 20:14:25.751515  282203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0412 20:14:25.770576  282203 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0412 20:14:25.784726  282203 ssh_runner.go:195] Run: openssl version
	I0412 20:14:25.790079  282203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/42006.pem && ln -fs /usr/share/ca-certificates/42006.pem /etc/ssl/certs/42006.pem"
	I0412 20:14:25.799378  282203 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42006.pem
	I0412 20:14:25.802945  282203 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Apr 12 19:26 /usr/share/ca-certificates/42006.pem
	I0412 20:14:25.803028  282203 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42006.pem
	I0412 20:14:25.808734  282203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/42006.pem /etc/ssl/certs/51391683.0"
	I0412 20:14:25.816535  282203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/420062.pem && ln -fs /usr/share/ca-certificates/420062.pem /etc/ssl/certs/420062.pem"
	I0412 20:14:25.825325  282203 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/420062.pem
	I0412 20:14:25.828750  282203 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Apr 12 19:26 /usr/share/ca-certificates/420062.pem
	I0412 20:14:25.828803  282203 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/420062.pem
	I0412 20:14:25.834167  282203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/420062.pem /etc/ssl/certs/3ec20f2e.0"
	I0412 20:14:25.841792  282203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0412 20:14:25.850010  282203 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0412 20:14:25.853624  282203 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Apr 12 19:21 /usr/share/ca-certificates/minikubeCA.pem
	I0412 20:14:25.853701  282203 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0412 20:14:25.859058  282203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
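The .0 link names above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject-hash names: each "openssl x509 -hash -noout" run prints the hash, and a symlink named <hash>.0 is how OpenSSL locates a CA in /etc/ssl/certs. A sketch reproducing one of them:

	# Prints b5213941, matching the /etc/ssl/certs/b5213941.0 link created above.
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem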
	I0412 20:14:25.866757  282203 kubeadm.go:391] StartCluster: {Name:newest-cni-20220412201253-42006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6-rc.0 ClusterName:newest-cni-20220412201253-42006 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.23.6-rc.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0412 20:14:25.866859  282203 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0412 20:14:25.866908  282203 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0412 20:14:25.894574  282203 cri.go:87] found id: "d969ce6ca95955b480d8655ab7bd7a09dabfb293b5353339e504f1f33b9eff67"
	I0412 20:14:25.894601  282203 cri.go:87] found id: "1d38fd300b7c85004f77d83cbb475438790ef3b9d337060fdb1b819d68d35ec9"
	I0412 20:14:25.894608  282203 cri.go:87] found id: "0bb8f66256b11644865229170aad9e34ea182a35e5158387000ff3b1865202fd"
	I0412 20:14:25.894614  282203 cri.go:87] found id: "a242ae4af2407bb2e31ddb8d71f49ef4cb0ff85cc236478c5f9535fa5c980eb3"
	I0412 20:14:25.894619  282203 cri.go:87] found id: "86c36d2f4f49c410f131864116fb679629344c479e0e487369a21787e119a356"
	I0412 20:14:25.894631  282203 cri.go:87] found id: "7c408f89710edca0b859d2e677ea93d81c6f5d56606b251c3a3d527ab1b6743d"
	I0412 20:14:25.894637  282203 cri.go:87] found id: ""
	I0412 20:14:25.894696  282203 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0412 20:14:25.909659  282203 cri.go:114] JSON = null
	W0412 20:14:25.909724  282203 kubeadm.go:398] unpause failed: list paused: list returned 0 containers, but ps returned 6
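The unpause warning is minikube cross-checking two views of the same kube-system containers: crictl reports six ids above, while runc's state for the k8s.io root returns empty JSON. The two commands it compares, exactly as logged:

	# CRI view vs. runc view of kube-system containers.
	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	sudo runc --root /run/containerd/runc/k8s.io list -f json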
	I0412 20:14:25.909774  282203 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0412 20:14:25.917474  282203 kubeadm.go:402] found existing configuration files, will attempt cluster restart
	I0412 20:14:25.917508  282203 kubeadm.go:601] restartCluster start
	I0412 20:14:25.917553  282203 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0412 20:14:25.925481  282203 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:14:25.926482  282203 kubeconfig.go:116] verify returned: extract IP: "newest-cni-20220412201253-42006" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0412 20:14:25.927149  282203 kubeconfig.go:127] "newest-cni-20220412201253-42006" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig - will repair!
	I0412 20:14:25.928050  282203 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig: {Name:mk47182dadcc139652898f38b199a7292a3b4031 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 20:14:25.929973  282203 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0412 20:14:25.937574  282203 api_server.go:165] Checking apiserver status ...
	I0412 20:14:25.937643  282203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:14:25.946692  282203 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
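Each repeated "Checking apiserver status" block here is one iteration of the same probe: pgrep for a kube-apiserver process, retried on a short interval until the static pod comes up. A sketch of the loop:

	# Poll for the kube-apiserver process (sketch of the retry logged here).
	until sudo pgrep -xnf 'kube-apiserver.*minikube.*'; do sleep 0.2; done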
	I0412 20:14:26.147196  282203 api_server.go:165] Checking apiserver status ...
	I0412 20:14:26.147313  282203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:14:26.157070  282203 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:14:26.347407  282203 api_server.go:165] Checking apiserver status ...
	I0412 20:14:26.347480  282203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:14:26.356517  282203 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:14:26.547770  282203 api_server.go:165] Checking apiserver status ...
	I0412 20:14:26.547871  282203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:14:26.557039  282203 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:14:26.747366  282203 api_server.go:165] Checking apiserver status ...
	I0412 20:14:26.747450  282203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:14:26.757308  282203 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:14:26.947424  282203 api_server.go:165] Checking apiserver status ...
	I0412 20:14:26.947524  282203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:14:26.956488  282203 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:14:27.147733  282203 api_server.go:165] Checking apiserver status ...
	I0412 20:14:27.147821  282203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:14:27.156974  282203 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:14:27.347245  282203 api_server.go:165] Checking apiserver status ...
	I0412 20:14:27.347355  282203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:14:27.356556  282203 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:14:27.547767  282203 api_server.go:165] Checking apiserver status ...
	I0412 20:14:27.547845  282203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:14:27.557055  282203 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:14:27.747315  282203 api_server.go:165] Checking apiserver status ...
	I0412 20:14:27.747407  282203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:14:27.756437  282203 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:14:27.947668  282203 api_server.go:165] Checking apiserver status ...
	I0412 20:14:27.947755  282203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:14:27.956980  282203 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:14:28.147211  282203 api_server.go:165] Checking apiserver status ...
	I0412 20:14:28.147335  282203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:14:28.156358  282203 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:14:28.347634  282203 api_server.go:165] Checking apiserver status ...
	I0412 20:14:28.347710  282203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:14:28.356777  282203 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:14:28.546978  282203 api_server.go:165] Checking apiserver status ...
	I0412 20:14:28.547079  282203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:14:28.555852  282203 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:14:28.746989  282203 api_server.go:165] Checking apiserver status ...
	I0412 20:14:28.747054  282203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:14:28.755735  282203 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:14:30.627141  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:14:32.627877  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:14:28.947273  282203 api_server.go:165] Checking apiserver status ...
	I0412 20:14:28.947359  282203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:14:28.956917  282203 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:14:28.956943  282203 api_server.go:165] Checking apiserver status ...
	I0412 20:14:28.956997  282203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:14:28.965673  282203 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:14:28.965703  282203 kubeadm.go:576] needs reconfigure: apiserver error: timed out waiting for the condition
	I0412 20:14:28.965712  282203 kubeadm.go:1067] stopping kube-system containers ...
	I0412 20:14:28.965726  282203 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0412 20:14:28.965780  282203 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0412 20:14:28.994340  282203 cri.go:87] found id: "d969ce6ca95955b480d8655ab7bd7a09dabfb293b5353339e504f1f33b9eff67"
	I0412 20:14:28.994369  282203 cri.go:87] found id: "1d38fd300b7c85004f77d83cbb475438790ef3b9d337060fdb1b819d68d35ec9"
	I0412 20:14:28.994378  282203 cri.go:87] found id: "0bb8f66256b11644865229170aad9e34ea182a35e5158387000ff3b1865202fd"
	I0412 20:14:28.994392  282203 cri.go:87] found id: "a242ae4af2407bb2e31ddb8d71f49ef4cb0ff85cc236478c5f9535fa5c980eb3"
	I0412 20:14:28.994401  282203 cri.go:87] found id: "86c36d2f4f49c410f131864116fb679629344c479e0e487369a21787e119a356"
	I0412 20:14:28.994410  282203 cri.go:87] found id: "7c408f89710edca0b859d2e677ea93d81c6f5d56606b251c3a3d527ab1b6743d"
	I0412 20:14:28.994419  282203 cri.go:87] found id: ""
	I0412 20:14:28.994431  282203 cri.go:232] Stopping containers: [d969ce6ca95955b480d8655ab7bd7a09dabfb293b5353339e504f1f33b9eff67 1d38fd300b7c85004f77d83cbb475438790ef3b9d337060fdb1b819d68d35ec9 0bb8f66256b11644865229170aad9e34ea182a35e5158387000ff3b1865202fd a242ae4af2407bb2e31ddb8d71f49ef4cb0ff85cc236478c5f9535fa5c980eb3 86c36d2f4f49c410f131864116fb679629344c479e0e487369a21787e119a356 7c408f89710edca0b859d2e677ea93d81c6f5d56606b251c3a3d527ab1b6743d]
	I0412 20:14:28.994486  282203 ssh_runner.go:195] Run: which crictl
	I0412 20:14:28.997755  282203 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop d969ce6ca95955b480d8655ab7bd7a09dabfb293b5353339e504f1f33b9eff67 1d38fd300b7c85004f77d83cbb475438790ef3b9d337060fdb1b819d68d35ec9 0bb8f66256b11644865229170aad9e34ea182a35e5158387000ff3b1865202fd a242ae4af2407bb2e31ddb8d71f49ef4cb0ff85cc236478c5f9535fa5c980eb3 86c36d2f4f49c410f131864116fb679629344c479e0e487369a21787e119a356 7c408f89710edca0b859d2e677ea93d81c6f5d56606b251c3a3d527ab1b6743d
	I0412 20:14:29.026024  282203 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0412 20:14:29.037162  282203 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0412 20:14:29.044772  282203 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Apr 12 20:13 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Apr 12 20:13 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2063 Apr 12 20:13 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Apr 12 20:13 /etc/kubernetes/scheduler.conf
	
	I0412 20:14:29.044835  282203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0412 20:14:29.052237  282203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0412 20:14:29.059409  282203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0412 20:14:29.066564  282203 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:14:29.066629  282203 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0412 20:14:29.073927  282203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0412 20:14:29.081806  282203 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:14:29.081873  282203 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0412 20:14:29.089097  282203 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0412 20:14:29.097286  282203 kubeadm.go:678] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0412 20:14:29.097318  282203 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6-rc.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0412 20:14:29.143554  282203 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6-rc.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0412 20:14:29.837517  282203 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6-rc.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0412 20:14:29.985443  282203 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6-rc.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0412 20:14:30.038605  282203 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6-rc.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0412 20:14:30.112525  282203 api_server.go:51] waiting for apiserver process to appear ...
	I0412 20:14:30.112599  282203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:14:30.622626  282203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:14:31.122421  282203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:14:31.622412  282203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:14:32.122000  282203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:14:32.622749  282203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:14:33.122311  282203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:14:33.622220  282203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:14:35.128008  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:14:37.628055  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:14:34.122750  282203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:14:34.622370  282203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:14:35.122375  282203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:14:35.622023  282203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:14:36.122611  282203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:14:36.192499  282203 api_server.go:71] duration metric: took 6.079970753s to wait for apiserver process to appear ...
	I0412 20:14:36.192531  282203 api_server.go:87] waiting for apiserver healthz status ...
	I0412 20:14:36.192547  282203 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0412 20:14:36.192951  282203 api_server.go:256] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0412 20:14:36.693238  282203 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0412 20:14:39.081785  282203 api_server.go:266] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0412 20:14:39.081830  282203 api_server.go:102] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
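The 403 means the API server is up but the anonymous probe is still forbidden, most likely because the RBAC bootstrap hook (rbac/bootstrap-roles, still failing in the 500 responses below) has not yet installed the roles that allow unauthenticated /healthz. A sketch of the same probe; -k skips TLS verification since the endpoint is addressed by raw IP:

	# Reproduce the health probe against the apiserver (sketch).
	curl -sk https://192.168.76.2:8443/healthz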
	I0412 20:14:39.193101  282203 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0412 20:14:39.198543  282203 api_server.go:266] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0412 20:14:39.198577  282203 api_server.go:102] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
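The [+]/[-] listing is the API server's verbose healthz format: one line per named check or post-start hook, and the overall status stays 500 until every hook reports ok. The itemized view can also be requested explicitly:

	# Ask for the itemized check list directly (sketch).
	curl -sk 'https://192.168.76.2:8443/healthz?verbose'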
	I0412 20:14:39.693125  282203 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0412 20:14:39.698513  282203 api_server.go:266] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0412 20:14:39.698546  282203 api_server.go:102] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0412 20:14:40.194142  282203 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0412 20:14:40.199360  282203 api_server.go:266] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0412 20:14:40.199402  282203 api_server.go:102] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0412 20:14:40.693984  282203 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0412 20:14:40.698538  282203 api_server.go:266] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0412 20:14:40.704599  282203 api_server.go:140] control plane version: v1.23.6-rc.0
	I0412 20:14:40.704627  282203 api_server.go:130] duration metric: took 4.512088959s to wait for apiserver health ...
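
The wait recorded above is a plain poll of /healthz: 403 means anonymous RBAC is not yet bootstrapped, 500 lists the post-start hooks still failing, and 200 "ok" ends the loop. A minimal Go sketch of that pattern, not minikube's actual api_server.go code; the endpoint and the roughly 500ms cadence are read off the timestamps above:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func waitForHealthz(url string, timeout time.Duration) error {
        // The apiserver serves a cert the probe can't verify as an anonymous
        // client during bootstrap, so verification is skipped (test rig only).
        client := &http.Client{
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
            Timeout:   2 * time.Second,
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // "ok"
                }
                // 403/500 bodies look exactly like the dumps in the log above.
                fmt.Printf("healthz %d:\n%s\n", resp.StatusCode, body)
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver at %s never reported healthy", url)
    }

    func main() {
        if err := waitForHealthz("https://192.168.76.2:8443/healthz", 2*time.Minute); err != nil {
            fmt.Println(err)
        }
    }
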
	I0412 20:14:40.704637  282203 cni.go:93] Creating CNI manager for ""
	I0412 20:14:40.704648  282203 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0412 20:14:40.707243  282203 out.go:176] * Configuring CNI (Container Networking Interface) ...
	I0412 20:14:40.707307  282203 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0412 20:14:40.711258  282203 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.6-rc.0/kubectl ...
	I0412 20:14:40.711285  282203 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0412 20:14:40.725079  282203 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.6-rc.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
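
The two steps above copy the generated kindnet manifest into the node and apply it with the kubectl binary minikube ships. A sketch of the apply step with os/exec, assuming cni.yaml is already in place; the paths are the ones from the log and exist only inside the minikube node:

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        // Same invocation the log records, wrapped in os/exec.
        out, err := exec.Command("sudo",
            "/var/lib/minikube/binaries/v1.23.6-rc.0/kubectl",
            "apply",
            "--kubeconfig=/var/lib/minikube/kubeconfig",
            "-f", "/var/tmp/minikube/cni.yaml").CombinedOutput()
        if err != nil {
            log.Fatalf("kubectl apply failed: %v\n%s", err, out)
        }
        log.Printf("applied CNI manifest:\n%s", out)
    }
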
	I0412 20:14:41.409231  282203 system_pods.go:43] waiting for kube-system pods to appear ...
	I0412 20:14:41.417820  282203 system_pods.go:59] 9 kube-system pods found
	I0412 20:14:41.417861  282203 system_pods.go:61] "coredns-64897985d-4bvbc" [fb9e8493-9c0d-4e05-b53a-1749537e5040] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0412 20:14:41.417873  282203 system_pods.go:61] "etcd-newest-cni-20220412201253-42006" [3aad179e-c3c7-4666-a6d3-d255640590a8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0412 20:14:41.417889  282203 system_pods.go:61] "kindnet-n5jt7" [a91f07c6-2b78-4581-b9ac-f3a3c3626dd8] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0412 20:14:41.417894  282203 system_pods.go:61] "kube-apiserver-newest-cni-20220412201253-42006" [2d4d9c73-5232-4a9c-99fb-7b9006cf532b] Running
	I0412 20:14:41.417903  282203 system_pods.go:61] "kube-controller-manager-newest-cni-20220412201253-42006" [ddacb408-0fe4-4726-b426-a84e7d23a1c0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0412 20:14:41.417913  282203 system_pods.go:61] "kube-proxy-jp96c" [3b9c939e-cafa-4614-a930-02dbf11e941f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0412 20:14:41.417920  282203 system_pods.go:61] "kube-scheduler-newest-cni-20220412201253-42006" [7cc7f50d-6fe0-405a-9438-00b84708bcdd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0412 20:14:41.417932  282203 system_pods.go:61] "metrics-server-b955d9d8-99nk4" [68d97c36-9d61-4926-bd17-e63396989cc8] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0412 20:14:41.417938  282203 system_pods.go:61] "storage-provisioner" [43ce4397-4b28-450b-b967-f8f2b597585c] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0412 20:14:41.417944  282203 system_pods.go:74] duration metric: took 8.691981ms to wait for pod list to return data ...
	I0412 20:14:41.417956  282203 node_conditions.go:102] verifying NodePressure condition ...
	I0412 20:14:41.421510  282203 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0412 20:14:41.421537  282203 node_conditions.go:123] node cpu capacity is 8
	I0412 20:14:41.421549  282203 node_conditions.go:105] duration metric: took 3.589136ms to run NodePressure ...
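
The NodePressure verification above amounts to reading each node's conditions and capacity. A hedged client-go sketch of the same check (the kubeconfig path is a placeholder, and this is not minikube's node_conditions.go code):

    package main

    import (
        "context"
        "fmt"
        "log"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            log.Fatal(err)
        }
        for _, n := range nodes.Items {
            for _, c := range n.Status.Conditions {
                // Any pressure condition being True would fail the check.
                if (c.Type == v1.NodeMemoryPressure || c.Type == v1.NodeDiskPressure ||
                    c.Type == v1.NodePIDPressure) && c.Status == v1.ConditionTrue {
                    log.Fatalf("node %s under pressure: %s", n.Name, c.Type)
                }
            }
            fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n",
                n.Name, n.Status.Capacity.Cpu(), n.Status.Capacity.StorageEphemeral())
        }
    }
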
	I0412 20:14:41.421570  282203 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.6-rc.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0412 20:14:41.576233  282203 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0412 20:14:41.583862  282203 ops.go:34] apiserver oom_adj: -16
	I0412 20:14:41.583887  282203 kubeadm.go:605] restartCluster took 15.666373103s
	I0412 20:14:41.583897  282203 kubeadm.go:393] StartCluster complete in 15.717149501s
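
The oom_adj probe above confirms the apiserver is shielded from the kernel OOM killer (-16 on the legacy oom_adj scale; modern kernels expose oom_score_adj alongside it). The same probe as a small Go sketch:

    package main

    import (
        "fmt"
        "log"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        // Mirrors the log's "pgrep -xnf kube-apiserver.*minikube.*":
        // newest process whose full command line matches the pattern.
        pid, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
        if err != nil {
            log.Fatal(err)
        }
        data, err := os.ReadFile(fmt.Sprintf("/proc/%s/oom_adj", strings.TrimSpace(string(pid))))
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("apiserver oom_adj: %s", data) // -16 in the run above
    }
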
	I0412 20:14:41.583915  282203 settings.go:142] acquiring lock: {Name:mkaf0259d09993f7f0249c32b54fea561e21f88c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 20:14:41.584019  282203 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0412 20:14:41.586119  282203 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig: {Name:mk47182dadcc139652898f38b199a7292a3b4031 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 20:14:41.591379  282203 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "newest-cni-20220412201253-42006" rescaled to 1
	I0412 20:14:41.591451  282203 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.23.6-rc.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0412 20:14:41.593719  282203 out.go:176] * Verifying Kubernetes components...
	I0412 20:14:41.591533  282203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.6-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0412 20:14:41.591554  282203 addons.go:415] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0412 20:14:41.591660  282203 config.go:178] Loaded profile config "newest-cni-20220412201253-42006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.6-rc.0
	I0412 20:14:41.593837  282203 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0412 20:14:41.593881  282203 addons.go:65] Setting storage-provisioner=true in profile "newest-cni-20220412201253-42006"
	I0412 20:14:41.593907  282203 addons.go:153] Setting addon storage-provisioner=true in "newest-cni-20220412201253-42006"
	W0412 20:14:41.593912  282203 addons.go:165] addon storage-provisioner should already be in state true
	I0412 20:14:41.593947  282203 addons.go:65] Setting default-storageclass=true in profile "newest-cni-20220412201253-42006"
	I0412 20:14:41.593971  282203 addons.go:65] Setting metrics-server=true in profile "newest-cni-20220412201253-42006"
	I0412 20:14:41.593979  282203 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-20220412201253-42006"
	I0412 20:14:41.593992  282203 addons.go:153] Setting addon metrics-server=true in "newest-cni-20220412201253-42006"
	W0412 20:14:41.594005  282203 addons.go:165] addon metrics-server should already be in state true
	I0412 20:14:41.593995  282203 addons.go:65] Setting dashboard=true in profile "newest-cni-20220412201253-42006"
	I0412 20:14:41.594043  282203 host.go:66] Checking if "newest-cni-20220412201253-42006" exists ...
	I0412 20:14:41.593973  282203 host.go:66] Checking if "newest-cni-20220412201253-42006" exists ...
	I0412 20:14:41.594045  282203 addons.go:153] Setting addon dashboard=true in "newest-cni-20220412201253-42006"
	W0412 20:14:41.594280  282203 addons.go:165] addon dashboard should already be in state true
	I0412 20:14:41.594328  282203 host.go:66] Checking if "newest-cni-20220412201253-42006" exists ...
	I0412 20:14:41.594334  282203 cli_runner.go:164] Run: docker container inspect newest-cni-20220412201253-42006 --format={{.State.Status}}
	I0412 20:14:41.594502  282203 cli_runner.go:164] Run: docker container inspect newest-cni-20220412201253-42006 --format={{.State.Status}}
	I0412 20:14:41.594639  282203 cli_runner.go:164] Run: docker container inspect newest-cni-20220412201253-42006 --format={{.State.Status}}
	I0412 20:14:41.594799  282203 cli_runner.go:164] Run: docker container inspect newest-cni-20220412201253-42006 --format={{.State.Status}}
	I0412 20:14:41.645907  282203 out.go:176]   - Using image kubernetesui/dashboard:v2.5.1
	I0412 20:14:41.648341  282203 out.go:176]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0412 20:14:41.650175  282203 out.go:176]   - Using image k8s.gcr.io/echoserver:1.4
	I0412 20:14:41.651645  282203 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0412 20:14:41.648424  282203 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0412 20:14:41.651681  282203 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0412 20:14:41.650260  282203 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0412 20:14:41.651782  282203 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0412 20:14:41.651798  282203 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0412 20:14:41.651811  282203 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0412 20:14:41.651751  282203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220412201253-42006
	I0412 20:14:41.651850  282203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220412201253-42006
	I0412 20:14:41.651850  282203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220412201253-42006
	I0412 20:14:41.667707  282203 addons.go:153] Setting addon default-storageclass=true in "newest-cni-20220412201253-42006"
	W0412 20:14:41.667739  282203 addons.go:165] addon default-storageclass should already be in state true
	I0412 20:14:41.667770  282203 host.go:66] Checking if "newest-cni-20220412201253-42006" exists ...
	I0412 20:14:41.668264  282203 cli_runner.go:164] Run: docker container inspect newest-cni-20220412201253-42006 --format={{.State.Status}}
	I0412 20:14:41.679431  282203 start.go:757] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0412 20:14:41.679495  282203 api_server.go:51] waiting for apiserver process to appear ...
	I0412 20:14:41.679542  282203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:14:41.692016  282203 api_server.go:71] duration metric: took 100.509345ms to wait for apiserver process to appear ...
	I0412 20:14:41.692053  282203 api_server.go:87] waiting for apiserver healthz status ...
	I0412 20:14:41.692097  282203 api_server.go:240] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0412 20:14:41.698410  282203 api_server.go:266] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0412 20:14:41.699444  282203 api_server.go:140] control plane version: v1.23.6-rc.0
	I0412 20:14:41.699470  282203 api_server.go:130] duration metric: took 7.409196ms to wait for apiserver health ...
	I0412 20:14:41.699481  282203 system_pods.go:43] waiting for kube-system pods to appear ...
	I0412 20:14:41.701111  282203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49422 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/newest-cni-20220412201253-42006/id_rsa Username:docker}
	I0412 20:14:41.706303  282203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49422 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/newest-cni-20220412201253-42006/id_rsa Username:docker}
	I0412 20:14:41.706406  282203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49422 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/newest-cni-20220412201253-42006/id_rsa Username:docker}
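
Each sshutil client above dials the docker-published host port for the node's sshd (22/tcp mapped to 127.0.0.1:49422 here) as user "docker" with the profile's id_rsa. A sketch with golang.org/x/crypto/ssh; the key path is a shortened placeholder, and the host-key check is skipped only because this is a throwaway test node:

    package main

    import (
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/path/to/.minikube/machines/<profile>/id_rsa") // placeholder
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            log.Fatal(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test rig only
        }
        client, err := ssh.Dial("tcp", "127.0.0.1:49422", cfg)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        sess, err := client.NewSession()
        if err != nil {
            log.Fatal(err)
        }
        defer sess.Close()
        out, err := sess.CombinedOutput("uname -a")
        if err != nil {
            log.Fatal(err)
        }
        log.Printf("%s", out)
    }
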
	I0412 20:14:41.707318  282203 system_pods.go:59] 9 kube-system pods found
	I0412 20:14:41.707353  282203 system_pods.go:61] "coredns-64897985d-4bvbc" [fb9e8493-9c0d-4e05-b53a-1749537e5040] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0412 20:14:41.707367  282203 system_pods.go:61] "etcd-newest-cni-20220412201253-42006" [3aad179e-c3c7-4666-a6d3-d255640590a8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0412 20:14:41.707377  282203 system_pods.go:61] "kindnet-n5jt7" [a91f07c6-2b78-4581-b9ac-f3a3c3626dd8] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0412 20:14:41.707389  282203 system_pods.go:61] "kube-apiserver-newest-cni-20220412201253-42006" [2d4d9c73-5232-4a9c-99fb-7b9006cf532b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0412 20:14:41.707406  282203 system_pods.go:61] "kube-controller-manager-newest-cni-20220412201253-42006" [ddacb408-0fe4-4726-b426-a84e7d23a1c0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0412 20:14:41.707419  282203 system_pods.go:61] "kube-proxy-jp96c" [3b9c939e-cafa-4614-a930-02dbf11e941f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0412 20:14:41.707429  282203 system_pods.go:61] "kube-scheduler-newest-cni-20220412201253-42006" [7cc7f50d-6fe0-405a-9438-00b84708bcdd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0412 20:14:41.707446  282203 system_pods.go:61] "metrics-server-b955d9d8-99nk4" [68d97c36-9d61-4926-bd17-e63396989cc8] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0412 20:14:41.707460  282203 system_pods.go:61] "storage-provisioner" [43ce4397-4b28-450b-b967-f8f2b597585c] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0412 20:14:41.707470  282203 system_pods.go:74] duration metric: took 7.981821ms to wait for pod list to return data ...
	I0412 20:14:41.707485  282203 default_sa.go:34] waiting for default service account to be created ...
	I0412 20:14:41.710431  282203 default_sa.go:45] found service account: "default"
	I0412 20:14:41.710468  282203 default_sa.go:55] duration metric: took 2.960657ms for default service account to be created ...
	I0412 20:14:41.710484  282203 kubeadm.go:548] duration metric: took 118.993322ms to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0412 20:14:41.710512  282203 node_conditions.go:102] verifying NodePressure condition ...
	I0412 20:14:41.713571  282203 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0412 20:14:41.713602  282203 node_conditions.go:123] node cpu capacity is 8
	I0412 20:14:41.713615  282203 node_conditions.go:105] duration metric: took 3.097862ms to run NodePressure ...
	I0412 20:14:41.713630  282203 start.go:213] waiting for startup goroutines ...
	I0412 20:14:41.720393  282203 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0412 20:14:41.720422  282203 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0412 20:14:41.720491  282203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-20220412201253-42006
	I0412 20:14:41.757709  282203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49422 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/newest-cni-20220412201253-42006/id_rsa Username:docker}
	I0412 20:14:41.804226  282203 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0412 20:14:41.804481  282203 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0412 20:14:41.804508  282203 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0412 20:14:41.804720  282203 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0412 20:14:41.804748  282203 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0412 20:14:41.819378  282203 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0412 20:14:41.819406  282203 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0412 20:14:41.819826  282203 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0412 20:14:41.819846  282203 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0412 20:14:41.834332  282203 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0412 20:14:41.834367  282203 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0412 20:14:41.834666  282203 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0412 20:14:41.834688  282203 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0412 20:14:41.885128  282203 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0412 20:14:41.885162  282203 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0412 20:14:41.887023  282203 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0412 20:14:41.887024  282203 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0412 20:14:41.904985  282203 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0412 20:14:41.905020  282203 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0412 20:14:41.984315  282203 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0412 20:14:41.984351  282203 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0412 20:14:42.005906  282203 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0412 20:14:42.005935  282203 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0412 20:14:42.084416  282203 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0412 20:14:42.084456  282203 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0412 20:14:42.108756  282203 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0412 20:14:42.108790  282203 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0412 20:14:42.191600  282203 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.6-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0412 20:14:42.390295  282203 addons.go:386] Verifying addon metrics-server=true in "newest-cni-20220412201253-42006"
	I0412 20:14:42.587518  282203 out.go:176] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0412 20:14:42.587549  282203 addons.go:417] enableAddons completed in 996.00198ms
	I0412 20:14:42.625739  282203 start.go:499] kubectl: 1.23.5, cluster: 1.23.6-rc.0 (minor skew: 0)
	I0412 20:14:42.628049  282203 out.go:176] * Done! kubectl is now configured to use "newest-cni-20220412201253-42006" cluster and "default" namespace by default
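
The closing version line compares kubectl's minor version against the cluster's; kubectl officially supports one minor version of skew in either direction, so "minor skew: 0" is well within bounds. A toy calculation of that figure, using the values from the log:

    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    // minorOf extracts the minor component of a "major.minor.patch[-pre]" version.
    func minorOf(v string) int {
        parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
        m, _ := strconv.Atoi(parts[1])
        return m
    }

    func main() {
        kubectl, cluster := "1.23.5", "1.23.6-rc.0" // values from the log above
        skew := minorOf(kubectl) - minorOf(cluster)
        if skew < 0 {
            skew = -skew
        }
        fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", kubectl, cluster, skew)
    }
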
	I0412 20:14:39.628134  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:14:41.628747  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:14:44.127896  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:14:46.627912  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:14:49.127578  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:14:51.627785  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:14:54.127667  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:14:56.627555  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:14:58.627673  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:15:01.127467  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:15:03.127958  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:15:05.627336  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:15:08.127482  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:15:10.128205  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:15:12.627006  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:15:14.627346  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:15:16.627715  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:15:19.127750  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:15:21.628033  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:15:24.127487  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:15:26.127773  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:15:28.627700  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:15:30.627863  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:15:32.627913  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:15:35.127918  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:15:37.627523  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:15:40.127924  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:15:42.627025  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:15:44.627571  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:15:46.628015  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:15:49.127289  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:15:51.627337  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:15:53.627707  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:15:56.127293  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:15:58.127903  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:16:00.128429  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:16:02.129651  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:16:04.627411  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:16:07.127206  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:16:09.128308  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:16:11.627780  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:16:14.127483  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:16:16.627781  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:16:19.127539  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:16:21.627671  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:16:24.127732  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:16:26.627810  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:16:29.126973  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:16:31.128232  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:16:33.626978  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:16:35.627709  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:16:38.127682  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:16:40.627714  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:16:43.127935  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:16:45.627570  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:16:47.627702  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:16:50.127764  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:16:52.627288  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:16:55.127319  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:16:57.128161  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:16:59.627554  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:17:02.128657  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:17:04.627577  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:17:07.127689  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:17:09.627222  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:17:12.127950  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:17:14.627403  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:17:17.127577  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:17:19.128140  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:17:21.627231  273955 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:17:22.630558  273955 node_ready.go:38] duration metric: took 4m0.009835916s waiting for node "default-k8s-different-port-20220412201228-42006" to be "Ready" ...
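
This is the loop that just timed out: roughly every 2.5s for the 6m budget, fetch the node and test its Ready condition. A client-go sketch of the same wait using wait.PollImmediate (placeholder kubeconfig path; not minikube's node_ready.go code):

    package main

    import (
        "context"
        "log"
        "time"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        name := "default-k8s-different-port-20220412201228-42006"
        err = wait.PollImmediate(2500*time.Millisecond, 6*time.Minute, func() (bool, error) {
            node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return false, nil // transient API errors: keep polling
            }
            for _, c := range node.Status.Conditions {
                if c.Type == v1.NodeReady {
                    return c.Status == v1.ConditionTrue, nil
                }
            }
            return false, nil
        })
        if err != nil {
            log.Fatalf("timed out waiting for node %q to be Ready: %v", name, err)
        }
    }
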
	I0412 20:17:22.633438  273955 out.go:176] 
	W0412 20:17:22.633564  273955 out.go:241] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0412 20:17:22.633578  273955 out.go:241] * 
	W0412 20:17:22.634288  273955 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	9e5744237dfde       6de166512aa22       30 seconds ago      Exited              kindnet-cni               5                   eac241d106cdd
	e86db06fb9ce1       3c53fa8541f95       4 minutes ago       Running             kube-proxy                0                   484376a2ef747
	51def5f5fb57c       25f8c7f3da61c       4 minutes ago       Running             etcd                      0                   fceaa872be874
	3c8657a1a5932       884d49d6d8c9f       4 minutes ago       Running             kube-scheduler            0                   ac91422e769ae
	1032ec9dc604b       3fc1d62d65872       4 minutes ago       Running             kube-apiserver            0                   c698f24911d58
	71af7fb31571e       b0c9e5e4dbb14       4 minutes ago       Running             kube-controller-manager   0                   32d426a8d8c0a
	
	* 
	* ==> containerd <==
	* -- Logs begin at Tue 2022-04-12 20:12:38 UTC, end at Tue 2022-04-12 20:17:23 UTC. --
	Apr 12 20:14:39 default-k8s-different-port-20220412201228-42006 containerd[471]: time="2022-04-12T20:14:39.222970747Z" level=warning msg="cleaning up after shim disconnected" id=20510a89158e5e7d7501e60ebb7bbe846ae003bd5c0afe6273713aaa039c6941 namespace=k8s.io
	Apr 12 20:14:39 default-k8s-different-port-20220412201228-42006 containerd[471]: time="2022-04-12T20:14:39.222985111Z" level=info msg="cleaning up dead shim"
	Apr 12 20:14:39 default-k8s-different-port-20220412201228-42006 containerd[471]: time="2022-04-12T20:14:39.234246280Z" level=warning msg="cleanup warnings time=\"2022-04-12T20:14:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2254\n"
	Apr 12 20:14:39 default-k8s-different-port-20220412201228-42006 containerd[471]: time="2022-04-12T20:14:39.865024583Z" level=info msg="RemoveContainer for \"575b6364e649796f16ac10951174a355ee1fdbfb9b34700762002d9486457902\""
	Apr 12 20:14:39 default-k8s-different-port-20220412201228-42006 containerd[471]: time="2022-04-12T20:14:39.869406648Z" level=info msg="RemoveContainer for \"575b6364e649796f16ac10951174a355ee1fdbfb9b34700762002d9486457902\" returns successfully"
	Apr 12 20:15:19 default-k8s-different-port-20220412201228-42006 containerd[471]: time="2022-04-12T20:15:19.682829175Z" level=info msg="CreateContainer within sandbox \"eac241d106cdd1f61526f1545df2f8aed3d703e05effb6e0695e11fe34b449c7\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:4,}"
	Apr 12 20:15:19 default-k8s-different-port-20220412201228-42006 containerd[471]: time="2022-04-12T20:15:19.696688713Z" level=info msg="CreateContainer within sandbox \"eac241d106cdd1f61526f1545df2f8aed3d703e05effb6e0695e11fe34b449c7\" for &ContainerMetadata{Name:kindnet-cni,Attempt:4,} returns container id \"07e5786acde4a835b00d8f15e8dc7966937a257ef07b018158203f654fd2748a\""
	Apr 12 20:15:19 default-k8s-different-port-20220412201228-42006 containerd[471]: time="2022-04-12T20:15:19.697198697Z" level=info msg="StartContainer for \"07e5786acde4a835b00d8f15e8dc7966937a257ef07b018158203f654fd2748a\""
	Apr 12 20:15:19 default-k8s-different-port-20220412201228-42006 containerd[471]: time="2022-04-12T20:15:19.801208242Z" level=info msg="StartContainer for \"07e5786acde4a835b00d8f15e8dc7966937a257ef07b018158203f654fd2748a\" returns successfully"
	Apr 12 20:15:30 default-k8s-different-port-20220412201228-42006 containerd[471]: time="2022-04-12T20:15:30.129892775Z" level=info msg="shim disconnected" id=07e5786acde4a835b00d8f15e8dc7966937a257ef07b018158203f654fd2748a
	Apr 12 20:15:30 default-k8s-different-port-20220412201228-42006 containerd[471]: time="2022-04-12T20:15:30.129951590Z" level=warning msg="cleaning up after shim disconnected" id=07e5786acde4a835b00d8f15e8dc7966937a257ef07b018158203f654fd2748a namespace=k8s.io
	Apr 12 20:15:30 default-k8s-different-port-20220412201228-42006 containerd[471]: time="2022-04-12T20:15:30.129966787Z" level=info msg="cleaning up dead shim"
	Apr 12 20:15:30 default-k8s-different-port-20220412201228-42006 containerd[471]: time="2022-04-12T20:15:30.140930842Z" level=warning msg="cleanup warnings time=\"2022-04-12T20:15:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2336\n"
	Apr 12 20:15:30 default-k8s-different-port-20220412201228-42006 containerd[471]: time="2022-04-12T20:15:30.959733248Z" level=info msg="RemoveContainer for \"20510a89158e5e7d7501e60ebb7bbe846ae003bd5c0afe6273713aaa039c6941\""
	Apr 12 20:15:30 default-k8s-different-port-20220412201228-42006 containerd[471]: time="2022-04-12T20:15:30.965091199Z" level=info msg="RemoveContainer for \"20510a89158e5e7d7501e60ebb7bbe846ae003bd5c0afe6273713aaa039c6941\" returns successfully"
	Apr 12 20:16:53 default-k8s-different-port-20220412201228-42006 containerd[471]: time="2022-04-12T20:16:53.681970018Z" level=info msg="CreateContainer within sandbox \"eac241d106cdd1f61526f1545df2f8aed3d703e05effb6e0695e11fe34b449c7\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:5,}"
	Apr 12 20:16:53 default-k8s-different-port-20220412201228-42006 containerd[471]: time="2022-04-12T20:16:53.695383955Z" level=info msg="CreateContainer within sandbox \"eac241d106cdd1f61526f1545df2f8aed3d703e05effb6e0695e11fe34b449c7\" for &ContainerMetadata{Name:kindnet-cni,Attempt:5,} returns container id \"9e5744237dfde180210747e05e22a0b3a09bfe83b09e6e89b16a9b1bb214ee4f\""
	Apr 12 20:16:53 default-k8s-different-port-20220412201228-42006 containerd[471]: time="2022-04-12T20:16:53.695942937Z" level=info msg="StartContainer for \"9e5744237dfde180210747e05e22a0b3a09bfe83b09e6e89b16a9b1bb214ee4f\""
	Apr 12 20:16:53 default-k8s-different-port-20220412201228-42006 containerd[471]: time="2022-04-12T20:16:53.804658155Z" level=info msg="StartContainer for \"9e5744237dfde180210747e05e22a0b3a09bfe83b09e6e89b16a9b1bb214ee4f\" returns successfully"
	Apr 12 20:17:04 default-k8s-different-port-20220412201228-42006 containerd[471]: time="2022-04-12T20:17:04.122947815Z" level=info msg="shim disconnected" id=9e5744237dfde180210747e05e22a0b3a09bfe83b09e6e89b16a9b1bb214ee4f
	Apr 12 20:17:04 default-k8s-different-port-20220412201228-42006 containerd[471]: time="2022-04-12T20:17:04.123044477Z" level=warning msg="cleaning up after shim disconnected" id=9e5744237dfde180210747e05e22a0b3a09bfe83b09e6e89b16a9b1bb214ee4f namespace=k8s.io
	Apr 12 20:17:04 default-k8s-different-port-20220412201228-42006 containerd[471]: time="2022-04-12T20:17:04.123061425Z" level=info msg="cleaning up dead shim"
	Apr 12 20:17:04 default-k8s-different-port-20220412201228-42006 containerd[471]: time="2022-04-12T20:17:04.134824337Z" level=warning msg="cleanup warnings time=\"2022-04-12T20:17:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2420\n"
	Apr 12 20:17:05 default-k8s-different-port-20220412201228-42006 containerd[471]: time="2022-04-12T20:17:05.135559073Z" level=info msg="RemoveContainer for \"07e5786acde4a835b00d8f15e8dc7966937a257ef07b018158203f654fd2748a\""
	Apr 12 20:17:05 default-k8s-different-port-20220412201228-42006 containerd[471]: time="2022-04-12T20:17:05.141079170Z" level=info msg="RemoveContainer for \"07e5786acde4a835b00d8f15e8dc7966937a257ef07b018158203f654fd2748a\" returns successfully"
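
The growing gaps between kindnet-cni attempts above (20:15:19 to 20:16:53 for attempts 4 and 5) are roughly consistent with kubelet's crash-loop backoff, which starts at 10s, doubles per restart, and caps at 5m. A toy printout of that schedule:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // kubelet's CrashLoopBackOff: 10s base, doubled per restart, 5m cap.
        backoff := 10 * time.Second
        const maxBackoff = 5 * time.Minute
        for attempt := 1; attempt <= 6; attempt++ {
            fmt.Printf("restart %d after ~%s\n", attempt, backoff)
            backoff *= 2
            if backoff > maxBackoff {
                backoff = maxBackoff
            }
        }
    }
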
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-different-port-20220412201228-42006
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-different-port-20220412201228-42006
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=dcd548d63d1c0dcbdc0ffc0bd37d4379117c142f
	                    minikube.k8s.io/name=default-k8s-different-port-20220412201228-42006
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_04_12T20_13_10_0700
	                    minikube.k8s.io/version=v1.25.2
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 12 Apr 2022 20:13:06 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-different-port-20220412201228-42006
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 12 Apr 2022 20:17:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 12 Apr 2022 20:13:21 +0000   Tue, 12 Apr 2022 20:13:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 12 Apr 2022 20:13:21 +0000   Tue, 12 Apr 2022 20:13:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 12 Apr 2022 20:13:21 +0000   Tue, 12 Apr 2022 20:13:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Tue, 12 Apr 2022 20:13:21 +0000   Tue, 12 Apr 2022 20:13:03 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    default-k8s-different-port-20220412201228-42006
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873828Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873828Ki
	  pods:               110
	System Info:
	  Machine ID:                 140a143b31184b58be947b52a01fff83
	  System UUID:                ef825856-4086-4c06-9629-95bede787d92
	  Boot ID:                    16b2caa1-c1b9-4ccc-85b8-d4dc3f51a5e1
	  Kernel Version:             5.13.0-1023-gcp
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.5.10
	  Kubelet Version:            v1.23.5
	  Kube-Proxy Version:         v1.23.5
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                                       ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-default-k8s-different-port-20220412201228-42006                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m9s
	  kube-system                 kindnet-852v4                                                              100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m2s
	  kube-system                 kube-apiserver-default-k8s-different-port-20220412201228-42006             250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 kube-controller-manager-default-k8s-different-port-20220412201228-42006    200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 kube-proxy-nfsgp                                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 kube-scheduler-default-k8s-different-port-20220412201228-42006             100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)    100m (1%)
	  memory             150Mi (0%)   50Mi (0%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 4m1s                   kube-proxy  
	  Normal  NodeHasSufficientMemory  4m22s (x5 over 4m23s)  kubelet     Node default-k8s-different-port-20220412201228-42006 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m22s (x5 over 4m23s)  kubelet     Node default-k8s-different-port-20220412201228-42006 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m22s (x4 over 4m23s)  kubelet     Node default-k8s-different-port-20220412201228-42006 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m9s                   kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m9s                   kubelet     Node default-k8s-different-port-20220412201228-42006 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m9s                   kubelet     Node default-k8s-different-port-20220412201228-42006 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m9s                   kubelet     Node default-k8s-different-port-20220412201228-42006 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m9s                   kubelet     Updated Node Allocatable limit across pods
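
The Ready=False condition above keeps the node.kubernetes.io/not-ready:NoSchedule taint in place (see Taints earlier in this section), which is why coredns, metrics-server, and storage-provisioner were stuck Pending in the log. A client-go sketch that surfaces the same taint (placeholder kubeconfig path):

    package main

    import (
        "context"
        "fmt"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        node, err := cs.CoreV1().Nodes().Get(context.TODO(),
            "default-k8s-different-port-20220412201228-42006", metav1.GetOptions{})
        if err != nil {
            log.Fatal(err)
        }
        for _, t := range node.Spec.Taints {
            // node.kubernetes.io/not-ready:NoSchedule stays until the CNI
            // initializes and the kubelet reports Ready.
            fmt.Printf("taint: %s=%s:%s\n", t.Key, t.Value, t.Effect)
        }
    }
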
	
	* 
	* ==> dmesg <==
	* [  +0.000009] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +0.125166] IPv4: martian source 10.244.0.7 from 10.244.0.7, on dev vethe3e22a2f
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff d2 83 e6 b4 2e c9 08 06
	[  +0.519855] IPv4: martian source 10.244.0.8 from 10.244.0.8, on dev vethde433a44
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fe f7 53 8a eb 26 08 06
	[  +0.208112] IPv4: martian source 10.244.0.9 from 10.244.0.9, on dev veth05fda112
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 06 c9 f0 64 c1 d9 08 06
	[Apr12 20:12] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +1.026706] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +1.023926] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +2.947865] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +1.023840] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +1.019933] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +2.959880] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +1.007861] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +1.023916] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	
	* 
	* ==> etcd [51def5f5fb57c8ab61a9c585b1fe038e725e93a3a81684c7e48cceffbcd0e646] <==
	* {"level":"info","ts":"2022-04-12T20:13:03.194Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2022-04-12T20:13:03.194Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2022-04-12T20:13:03.196Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-04-12T20:13:03.196Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-04-12T20:13:03.196Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-04-12T20:13:03.196Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-04-12T20:13:03.196Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-04-12T20:13:04.085Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2022-04-12T20:13:04.085Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2022-04-12T20:13:04.085Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2022-04-12T20:13:04.085Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2022-04-12T20:13:04.085Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-04-12T20:13:04.085Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2022-04-12T20:13:04.085Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-04-12T20:13:04.085Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:default-k8s-different-port-20220412201228-42006 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2022-04-12T20:13:04.085Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-04-12T20:13:04.085Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-04-12T20:13:04.085Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-04-12T20:13:04.085Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-04-12T20:13:04.085Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-04-12T20:13:04.086Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2022-04-12T20:13:04.086Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-04-12T20:13:04.086Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-04-12T20:13:04.087Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-04-12T20:13:04.087Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	
	* 
	* ==> kernel <==
	*  20:17:23 up  2:59,  0 users,  load average: 0.17, 0.72, 1.35
	Linux default-k8s-different-port-20220412201228-42006 5.13.0-1023-gcp #28~20.04.1-Ubuntu SMP Wed Mar 30 03:51:07 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [1032ec9dc604b2d805be253a0f7df89424fc5ef71ef86566ee57cd79cf66939c] <==
	* I0412 20:13:06.378891       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0412 20:13:06.380058       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
	I0412 20:13:06.380365       1 shared_informer.go:247] Caches are synced for node_authorizer 
	I0412 20:13:06.380526       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0412 20:13:06.380660       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I0412 20:13:06.389081       1 controller.go:611] quota admission added evaluator for: namespaces
	I0412 20:13:07.223083       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0412 20:13:07.223129       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0412 20:13:07.227737       1 storage_scheduling.go:93] created PriorityClass system-node-critical with value 2000001000
	I0412 20:13:07.231059       1 storage_scheduling.go:93] created PriorityClass system-cluster-critical with value 2000000000
	I0412 20:13:07.231090       1 storage_scheduling.go:109] all system priority classes are created successfully or already exist.
	I0412 20:13:07.640851       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0412 20:13:07.682744       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0412 20:13:07.805024       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0412 20:13:07.813172       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0412 20:13:07.814261       1 controller.go:611] quota admission added evaluator for: endpoints
	I0412 20:13:07.818683       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0412 20:13:08.360879       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0412 20:13:09.411225       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0412 20:13:09.419785       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0412 20:13:09.431828       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0412 20:13:14.599758       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0412 20:13:21.818265       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0412 20:13:21.968747       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0412 20:13:22.481492       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	* 
	* ==> kube-controller-manager [71af7fb31571e3cef12dcdba3ab49897e95bdbe6c1d9d6d5bbb1c36c97242cda] <==
	* I0412 20:13:21.215625       1 shared_informer.go:247] Caches are synced for taint 
	I0412 20:13:21.215676       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0412 20:13:21.215694       1 node_lifecycle_controller.go:1397] Initializing eviction metric for zone: 
	W0412 20:13:21.215761       1 node_lifecycle_controller.go:1012] Missing timestamp for Node default-k8s-different-port-20220412201228-42006. Assuming now as a timestamp.
	I0412 20:13:21.215805       1 node_lifecycle_controller.go:1163] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I0412 20:13:21.215859       1 event.go:294] "Event occurred" object="default-k8s-different-port-20220412201228-42006" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node default-k8s-different-port-20220412201228-42006 event: Registered Node default-k8s-different-port-20220412201228-42006 in Controller"
	I0412 20:13:21.229490       1 shared_informer.go:247] Caches are synced for deployment 
	I0412 20:13:21.315704       1 shared_informer.go:247] Caches are synced for ReplicationController 
	I0412 20:13:21.360412       1 shared_informer.go:247] Caches are synced for disruption 
	I0412 20:13:21.360445       1 disruption.go:371] Sending events to api server.
	I0412 20:13:21.368497       1 shared_informer.go:247] Caches are synced for HPA 
	I0412 20:13:21.385835       1 shared_informer.go:247] Caches are synced for resource quota 
	I0412 20:13:21.400192       1 shared_informer.go:247] Caches are synced for endpoint 
	I0412 20:13:21.411344       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
	I0412 20:13:21.424347       1 shared_informer.go:247] Caches are synced for resource quota 
	I0412 20:13:21.821606       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0412 20:13:21.821636       1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0412 20:13:21.825308       1 event.go:294] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-852v4"
	I0412 20:13:21.825372       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-nfsgp"
	I0412 20:13:21.839671       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0412 20:13:21.971282       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-64897985d to 2"
	I0412 20:13:22.044641       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-64897985d to 1"
	I0412 20:13:22.121317       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-rmqrj"
	I0412 20:13:22.126350       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-c2gzm"
	I0412 20:13:22.145463       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-64897985d-rmqrj"
	
	* 
	* ==> kube-proxy [e86db06fb9ce1685b312bc36622f28895b85dab6e39ee399901dce4efc6da848] <==
	* I0412 20:13:22.455007       1 node.go:163] Successfully retrieved node IP: 192.168.49.2
	I0412 20:13:22.455073       1 server_others.go:138] "Detected node IP" address="192.168.49.2"
	I0412 20:13:22.455117       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0412 20:13:22.478285       1 server_others.go:206] "Using iptables Proxier"
	I0412 20:13:22.478320       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0412 20:13:22.478326       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0412 20:13:22.478350       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0412 20:13:22.478788       1 server.go:656] "Version info" version="v1.23.5"
	I0412 20:13:22.479353       1 config.go:317] "Starting service config controller"
	I0412 20:13:22.479385       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0412 20:13:22.479423       1 config.go:226] "Starting endpoint slice config controller"
	I0412 20:13:22.479433       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0412 20:13:22.579611       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0412 20:13:22.579633       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [3c8657a1a5932876c532e5632e32b1b7bd034c015a4b5519a1ff53cf749d1ffd] <==
	* W0412 20:13:06.388989       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0412 20:13:06.389007       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0412 20:13:06.389014       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0412 20:13:06.389018       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0412 20:13:06.389730       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0412 20:13:06.389771       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0412 20:13:07.206657       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0412 20:13:07.206707       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0412 20:13:07.265873       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0412 20:13:07.265925       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0412 20:13:07.296201       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0412 20:13:07.296245       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0412 20:13:07.302602       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0412 20:13:07.302649       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0412 20:13:07.338917       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0412 20:13:07.338952       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0412 20:13:07.341982       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0412 20:13:07.342023       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0412 20:13:07.427305       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0412 20:13:07.427338       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0412 20:13:07.446555       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0412 20:13:07.446595       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0412 20:13:07.468839       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0412 20:13:07.468878       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0412 20:13:07.903442       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2022-04-12 20:12:38 UTC, end at Tue 2022-04-12 20:17:24 UTC. --
	Apr 12 20:16:11 default-k8s-different-port-20220412201228-42006 kubelet[1290]: E0412 20:16:11.680236    1290 pod_workers.go:949] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kindnet-cni pod=kindnet-852v4_kube-system(d4596d79-4aba-4c96-9fd5-c2c2b2010810)\"" pod="kube-system/kindnet-852v4" podUID=d4596d79-4aba-4c96-9fd5-c2c2b2010810
	Apr 12 20:16:14 default-k8s-different-port-20220412201228-42006 kubelet[1290]: E0412 20:16:14.832714    1290 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:16:19 default-k8s-different-port-20220412201228-42006 kubelet[1290]: E0412 20:16:19.833991    1290 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:16:24 default-k8s-different-port-20220412201228-42006 kubelet[1290]: E0412 20:16:24.835621    1290 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:16:26 default-k8s-different-port-20220412201228-42006 kubelet[1290]: I0412 20:16:26.680276    1290 scope.go:110] "RemoveContainer" containerID="07e5786acde4a835b00d8f15e8dc7966937a257ef07b018158203f654fd2748a"
	Apr 12 20:16:26 default-k8s-different-port-20220412201228-42006 kubelet[1290]: E0412 20:16:26.680599    1290 pod_workers.go:949] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kindnet-cni pod=kindnet-852v4_kube-system(d4596d79-4aba-4c96-9fd5-c2c2b2010810)\"" pod="kube-system/kindnet-852v4" podUID=d4596d79-4aba-4c96-9fd5-c2c2b2010810
	Apr 12 20:16:29 default-k8s-different-port-20220412201228-42006 kubelet[1290]: E0412 20:16:29.836677    1290 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:16:34 default-k8s-different-port-20220412201228-42006 kubelet[1290]: E0412 20:16:34.837638    1290 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:16:39 default-k8s-different-port-20220412201228-42006 kubelet[1290]: I0412 20:16:39.679934    1290 scope.go:110] "RemoveContainer" containerID="07e5786acde4a835b00d8f15e8dc7966937a257ef07b018158203f654fd2748a"
	Apr 12 20:16:39 default-k8s-different-port-20220412201228-42006 kubelet[1290]: E0412 20:16:39.680298    1290 pod_workers.go:949] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kindnet-cni pod=kindnet-852v4_kube-system(d4596d79-4aba-4c96-9fd5-c2c2b2010810)\"" pod="kube-system/kindnet-852v4" podUID=d4596d79-4aba-4c96-9fd5-c2c2b2010810
	Apr 12 20:16:39 default-k8s-different-port-20220412201228-42006 kubelet[1290]: E0412 20:16:39.838923    1290 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:16:44 default-k8s-different-port-20220412201228-42006 kubelet[1290]: E0412 20:16:44.840012    1290 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:16:49 default-k8s-different-port-20220412201228-42006 kubelet[1290]: E0412 20:16:49.840990    1290 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:16:53 default-k8s-different-port-20220412201228-42006 kubelet[1290]: I0412 20:16:53.679818    1290 scope.go:110] "RemoveContainer" containerID="07e5786acde4a835b00d8f15e8dc7966937a257ef07b018158203f654fd2748a"
	Apr 12 20:16:54 default-k8s-different-port-20220412201228-42006 kubelet[1290]: E0412 20:16:54.842613    1290 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:16:59 default-k8s-different-port-20220412201228-42006 kubelet[1290]: E0412 20:16:59.844004    1290 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:17:04 default-k8s-different-port-20220412201228-42006 kubelet[1290]: E0412 20:17:04.845537    1290 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:17:05 default-k8s-different-port-20220412201228-42006 kubelet[1290]: I0412 20:17:05.134471    1290 scope.go:110] "RemoveContainer" containerID="07e5786acde4a835b00d8f15e8dc7966937a257ef07b018158203f654fd2748a"
	Apr 12 20:17:05 default-k8s-different-port-20220412201228-42006 kubelet[1290]: I0412 20:17:05.134887    1290 scope.go:110] "RemoveContainer" containerID="9e5744237dfde180210747e05e22a0b3a09bfe83b09e6e89b16a9b1bb214ee4f"
	Apr 12 20:17:05 default-k8s-different-port-20220412201228-42006 kubelet[1290]: E0412 20:17:05.135205    1290 pod_workers.go:949] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kindnet-cni pod=kindnet-852v4_kube-system(d4596d79-4aba-4c96-9fd5-c2c2b2010810)\"" pod="kube-system/kindnet-852v4" podUID=d4596d79-4aba-4c96-9fd5-c2c2b2010810
	Apr 12 20:17:09 default-k8s-different-port-20220412201228-42006 kubelet[1290]: E0412 20:17:09.846829    1290 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:17:14 default-k8s-different-port-20220412201228-42006 kubelet[1290]: E0412 20:17:14.847546    1290 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:17:18 default-k8s-different-port-20220412201228-42006 kubelet[1290]: I0412 20:17:18.680263    1290 scope.go:110] "RemoveContainer" containerID="9e5744237dfde180210747e05e22a0b3a09bfe83b09e6e89b16a9b1bb214ee4f"
	Apr 12 20:17:18 default-k8s-different-port-20220412201228-42006 kubelet[1290]: E0412 20:17:18.680679    1290 pod_workers.go:949] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kindnet-cni pod=kindnet-852v4_kube-system(d4596d79-4aba-4c96-9fd5-c2c2b2010810)\"" pod="kube-system/kindnet-852v4" podUID=d4596d79-4aba-4c96-9fd5-c2c2b2010810
	Apr 12 20:17:19 default-k8s-different-port-20220412201228-42006 kubelet[1290]: E0412 20:17:19.848964    1290 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220412201228-42006 -n default-k8s-different-port-20220412201228-42006
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-different-port-20220412201228-42006 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: coredns-64897985d-c2gzm storage-provisioner
helpers_test.go:272: ======> post-mortem[TestStartStop/group/default-k8s-different-port/serial/FirstStart]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context default-k8s-different-port-20220412201228-42006 describe pod coredns-64897985d-c2gzm storage-provisioner
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20220412201228-42006 describe pod coredns-64897985d-c2gzm storage-provisioner: exit status 1 (51.468849ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-64897985d-c2gzm" not found
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:277: kubectl --context default-k8s-different-port-20220412201228-42006 describe pod coredns-64897985d-c2gzm storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/FirstStart (296.55s)
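Note: the kubelet log above shows kindnet-cni stuck in CrashLoopBackOff, which leaves the CNI uninitialized ("Container runtime network not ready ... cni plugin not initialized") and keeps the node NotReady. A minimal manual triage sketch for this failure mode (illustrative shell commands, not part of the recorded test run; the context and pod names are taken from the logs above):

	# Why is the node NotReady, and which taints does it carry?
	kubectl --context default-k8s-different-port-20220412201228-42006 describe node default-k8s-different-port-20220412201228-42006
	# Locate the crashing CNI daemonset pod, then read the logs of its previous (crashed) container
	kubectl --context default-k8s-different-port-20220412201228-42006 -n kube-system get pods -o wide
	kubectl --context default-k8s-different-port-20220412201228-42006 -n kube-system logs -p kindnet-852v4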

TestStartStop/group/default-k8s-different-port/serial/DeployApp (484.82s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:180: (dbg) Run:  kubectl --context default-k8s-different-port-20220412201228-42006 create -f testdata/busybox.yaml
start_stop_delete_test.go:180: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [aa787663-f796-4e75-b849-7a88b014969a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)

=== CONT  TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:180: ***** TestStartStop/group/default-k8s-different-port/serial/DeployApp: pod "integration-test=busybox" failed to start within 8m0s: timed out waiting for the condition ****
start_stop_delete_test.go:180: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220412201228-42006 -n default-k8s-different-port-20220412201228-42006
start_stop_delete_test.go:180: TestStartStop/group/default-k8s-different-port/serial/DeployApp: showing logs for failed pods as of 2022-04-12 20:25:25.489803514 +0000 UTC m=+3904.466101977
start_stop_delete_test.go:180: (dbg) Run:  kubectl --context default-k8s-different-port-20220412201228-42006 describe po busybox -n default
start_stop_delete_test.go:180: (dbg) kubectl --context default-k8s-different-port-20220412201228-42006 describe po busybox -n default:
Name:         busybox
Namespace:    default
Priority:     0
Node:         <none>
Labels:       integration-test=busybox
Annotations:  <none>
Status:       Pending
IP:           
IPs:          <none>
Containers:
  busybox:
    Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
    Port:       <none>
    Host Port:  <none>
    Command:
      sleep
      3600
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bcrdt (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  kube-api-access-bcrdt:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age               From               Message
  ----     ------            ----              ----               -------
  Warning  FailedScheduling  45s (x8 over 8m)  default-scheduler  0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.
start_stop_delete_test.go:180: (dbg) Run:  kubectl --context default-k8s-different-port-20220412201228-42006 logs busybox -n default
start_stop_delete_test.go:180: (dbg) kubectl --context default-k8s-different-port-20220412201228-42006 logs busybox -n default:
start_stop_delete_test.go:180: wait: integration-test=busybox within 8m0s: timed out waiting for the condition
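Note: the FailedScheduling events above show the busybox pod blocked by the node.kubernetes.io/not-ready taint, consistent with the CNI failure in the earlier logs. An illustrative one-liner (not executed by the test) to confirm the taints directly:

	# Print each node's name and taints in the profile's context
	kubectl --context default-k8s-different-port-20220412201228-42006 get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints}{"\n"}{end}'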
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220412201228-42006
helpers_test.go:235: (dbg) docker inspect default-k8s-different-port-20220412201228-42006:

-- stdout --
	[
	    {
	        "Id": "6642b489f96391820ba70b96c7534c3a76d670c12f14b131c414488b6433932f",
	        "Created": "2022-04-12T20:12:37.404174744Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 274647,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-04-12T20:12:37.803691082Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:44d43b69f3d5ba7f801dca891b535f23f9839671e82277938ec7dc42a22c50d6",
	        "ResolvConfPath": "/var/lib/docker/containers/6642b489f96391820ba70b96c7534c3a76d670c12f14b131c414488b6433932f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6642b489f96391820ba70b96c7534c3a76d670c12f14b131c414488b6433932f/hostname",
	        "HostsPath": "/var/lib/docker/containers/6642b489f96391820ba70b96c7534c3a76d670c12f14b131c414488b6433932f/hosts",
	        "LogPath": "/var/lib/docker/containers/6642b489f96391820ba70b96c7534c3a76d670c12f14b131c414488b6433932f/6642b489f96391820ba70b96c7534c3a76d670c12f14b131c414488b6433932f-json.log",
	        "Name": "/default-k8s-different-port-20220412201228-42006",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-different-port-20220412201228-42006:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-different-port-20220412201228-42006",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/6c20441da854c76109edadd5c14467eeab1a532a78b987301c8ccc63f013fdb5-init/diff:/var/lib/docker/overlay2/a46d95d024de4bf9705eb193a92586bdab1878cd991975232b71b00099a9dcbd/diff:/var/lib/docker/overlay2/ea82ee4a684697cc3575193cd81b57372b927c9bf8e744fce634f9abd0ce56f9/diff:/var/lib/docker/overlay2/78746ad8dd0d6497f442bd186c99cfd280a7ed0ff07c9d33d217c0f00c8c4565/diff:/var/lib/docker/overlay2/a402f380eceb56655ea5f1e6ca4a61a01ae014a5df04f1a7d02d8f57ff3e6c84/diff:/var/lib/docker/overlay2/b27a231791a4d14a662f9e6e34fdd213411e56cc17149199657aa480018b3c72/diff:/var/lib/docker/overlay2/0a44e7fc2c8d5589d496b9d0585d39e8e142f48342ff9669a35c370bd0298e42/diff:/var/lib/docker/overlay2/6ca98e52ca7d4cc60d14bd2db9969dd3356e0e0ce3acd5bfb5734e6e59f52c7e/diff:/var/lib/docker/overlay2/9957a7c00c30c9d801326093ddf20994a7ee1daaa54bc4dac5c2dd6d8711bd7e/diff:/var/lib/docker/overlay2/f7a1aafecf6ee716c484b5eecbbf236a53607c253fe283c289707fad85495a88/diff:/var/lib/docker/overlay2/fe8cd1
26522650fedfc827751e0b74da9a882ff48de51bc9dee6428ee3bc1122/diff:/var/lib/docker/overlay2/5b4cc7e4a78288063ad39231ca158608aa28e9dec6015d4e186e4c4d6888017f/diff:/var/lib/docker/overlay2/2a754ceb6abee0f92c99667fae50c7899233e94595630e9caffbf73cda1ff741/diff:/var/lib/docker/overlay2/9e69139d9b2bc63ab678378e004018ece394ec37e8289ba5eb30901dda160da5/diff:/var/lib/docker/overlay2/3db8e6413b3a1f309b81d2e1a79c3d239c4e4568b31a6f4bf92511f477f3a61d/diff:/var/lib/docker/overlay2/5ab54e45d09e2d6da4f4228ebae3075b5974e1d847526c1011fc7368392ef0d2/diff:/var/lib/docker/overlay2/6daf6a3cf916347bbbb70ace4aab29dd0f272dc9e39d6b0bf14940470857f1d5/diff:/var/lib/docker/overlay2/b85d29df9ed74e769c82a956eb46ca4eaf51018e94270fee2f58a6f2d82c354c/diff:/var/lib/docker/overlay2/0804b9c30e0dcc68e15139106e47bca1969b010d520652c87ff1476f5da9b799/diff:/var/lib/docker/overlay2/2ef50ba91c77826aae2efca8daf7194c2d56fd8e745476a35413585cdab580a6/diff:/var/lib/docker/overlay2/6f5a272367c30d47254dedc8a42e6b2791c406c3b74fd6a8242d568e4ec362e3/diff:/var/lib/d
ocker/overlay2/e978bd5ca7463862ca1b51d0bf19f95d916464dc866f09f1ab4a5ae4c082c3a9/diff:/var/lib/docker/overlay2/0d60a5805e276ca3bff4824250eab1d2960e9d10d28282e07652204c07dc107f/diff:/var/lib/docker/overlay2/d00efa0bc999057fcf3efdeed81022cc8b9b9871919f11d7d9199a3d22fda41b/diff:/var/lib/docker/overlay2/44d3db5bf7925c4cc8ee60008ff23d799e12ea6586850d797b930fa796788861/diff:/var/lib/docker/overlay2/4af15c525b7ce96b7fd4117c156f53cf9099702641c2907909c12b7019563d44/diff:/var/lib/docker/overlay2/ae9ca4b8da4afb1303158a42ec2ac83dc057c0eaefcd69b7eeaa094ae24a39e7/diff:/var/lib/docker/overlay2/afb8ebd776ddcba17d1056f2350cd0b303c6664964644896a92e9c07252b5d95/diff:/var/lib/docker/overlay2/41b6235378ad54ccaec907f16811e7cd66bd777db63151293f4d8247a33af8f1/diff:/var/lib/docker/overlay2/e079465076581cb577a9d5c7d676cecb6495ddd73d9fc330e734203dd7e48607/diff:/var/lib/docker/overlay2/2d3a7c3e62a99d54d94c2562e13b904453442bda8208afe73cdbe1afdbdd0684/diff:/var/lib/docker/overlay2/b9e03b9cbc1c5a9bbdbb0c99ca5d7539c2fa81a37872c40e07377b52f19
50f4b/diff:/var/lib/docker/overlay2/fd0b72378869edec809e7ead1e4448ae67c73245e0e98d751c51253c80f12d56/diff:/var/lib/docker/overlay2/a34f5625ad35eb2eb1058204a5c23590d70d9aae62a3a0cf05f87501c388ccde/diff:/var/lib/docker/overlay2/6221ad5f4d7b133c35d96ab112cf2eb437196475a72ea0ec8952c058c6644381/diff:/var/lib/docker/overlay2/b33a322162ab62a47e5e731b35da4a989d8a79fcb67e1925b109eace6772370c/diff:/var/lib/docker/overlay2/b52fc81aca49f276f1c709fa139521063628f4042b9da5969a3487a57ee3226b/diff:/var/lib/docker/overlay2/5b4d11a181cad1ea657c7ea99d422b51c942ece21b8d24442b4e8806644e0e1c/diff:/var/lib/docker/overlay2/1620ce1d42f02f38d07f3ff0970e3df6940a3be20f3c7cd835f4f40f5cc2d010/diff:/var/lib/docker/overlay2/43f18c528700dc241024bb24f43a0d5192ecc9575f4b053582410f6265326434/diff:/var/lib/docker/overlay2/e59874999e485483e50da428a499e40c91890c33515857454d7a64bc04ca0c43/diff:/var/lib/docker/overlay2/a120ff1bbaa325cd87d2682d6751d3bf287b66d4bbe31bd1f9f6283d724491ac/diff:/var/lib/docker/overlay2/a6a6f3646fabc023283ff6349b9627be8332c4
bb740688f8fda12c98bd76b725/diff:/var/lib/docker/overlay2/3c2b110c4b3a8689b2792b2b73f99f06bd9858b494c2164e812208579b0223f2/diff:/var/lib/docker/overlay2/98e3881e2e4128283f8d66fafc082bc795e22eab77f135635d3249367b92ba5c/diff:/var/lib/docker/overlay2/ce937670cf64eff618c699bfd15e46c6d70c0184fef594182e5ec6df83b265bc/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6c20441da854c76109edadd5c14467eeab1a532a78b987301c8ccc63f013fdb5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6c20441da854c76109edadd5c14467eeab1a532a78b987301c8ccc63f013fdb5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6c20441da854c76109edadd5c14467eeab1a532a78b987301c8ccc63f013fdb5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-different-port-20220412201228-42006",
	                "Source": "/var/lib/docker/volumes/default-k8s-different-port-20220412201228-42006/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-different-port-20220412201228-42006",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-different-port-20220412201228-42006",
	                "name.minikube.sigs.k8s.io": "default-k8s-different-port-20220412201228-42006",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "fdc8d9902df162e7f584a615cf1a67a1ddf8a0e7aa58b4c4180e9bac803f9952",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49412"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49411"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49408"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49410"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49409"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/fdc8d9902df1",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-different-port-20220412201228-42006": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "6642b489f963",
	                        "default-k8s-different-port-20220412201228-42006"
	                    ],
	                    "NetworkID": "e1e5eb80641804e0cf03f9ee1959284f2ec05fd6c94f6b6eb19931fc6032414c",
	                    "EndpointID": "dc02bef0f4abc1393769df835a0a013dde3e78db69d9fafacbeb8f560aaccea3",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
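Note: the inspect output above confirms the kic container is running and that this profile publishes 8444/tcp (the "different port" under test) to 127.0.0.1:49409. An illustrative way (not part of the recorded run) to pull just the port map out of the full inspect dump, using docker's built-in Go templating:

	# Extract only the published ports as JSON
	docker inspect -f '{{json .NetworkSettings.Ports}}' default-k8s-different-port-20220412201228-42006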
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220412201228-42006 -n default-k8s-different-port-20220412201228-42006
helpers_test.go:244: <<< TestStartStop/group/default-k8s-different-port/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/DeployApp]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-different-port-20220412201228-42006 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-different-port-20220412201228-42006 logs -n 25: (1.075983482s)
helpers_test.go:252: TestStartStop/group/default-k8s-different-port/serial/DeployApp logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                            Args                            |                     Profile                     |  User   | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| delete  | -p                                                         | no-preload-20220412200453-42006                 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:12:27 UTC | Tue, 12 Apr 2022 20:12:27 UTC |
	|         | no-preload-20220412200453-42006                            |                                                 |         |         |                               |                               |
	| delete  | -p                                                         | disable-driver-mounts-20220412201227-42006      | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:12:27 UTC | Tue, 12 Apr 2022 20:12:28 UTC |
	|         | disable-driver-mounts-20220412201227-42006                 |                                                 |         |         |                               |                               |
	| -p      | bridge-20220412195202-42006                                | bridge-20220412195202-42006                     | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:12:49 UTC | Tue, 12 Apr 2022 20:12:50 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	| delete  | -p bridge-20220412195202-42006                             | bridge-20220412195202-42006                     | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:12:50 UTC | Tue, 12 Apr 2022 20:12:53 UTC |
	| start   | -p newest-cni-20220412201253-42006 --memory=2200           | newest-cni-20220412201253-42006                 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:12:53 UTC | Tue, 12 Apr 2022 20:13:47 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                 |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                 |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                 |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                 |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=containerd            |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.23.6-rc.0                          |                                                 |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | newest-cni-20220412201253-42006                 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:13:47 UTC | Tue, 12 Apr 2022 20:13:48 UTC |
	|         | newest-cni-20220412201253-42006                            |                                                 |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                               |                               |
	| stop    | -p                                                         | newest-cni-20220412201253-42006                 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:13:48 UTC | Tue, 12 Apr 2022 20:14:08 UTC |
	|         | newest-cni-20220412201253-42006                            |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | newest-cni-20220412201253-42006                 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:14:08 UTC | Tue, 12 Apr 2022 20:14:08 UTC |
	|         | newest-cni-20220412201253-42006                            |                                                 |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                               |                               |
	| start   | -p newest-cni-20220412201253-42006 --memory=2200           | newest-cni-20220412201253-42006                 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:14:08 UTC | Tue, 12 Apr 2022 20:14:42 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                 |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                 |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                 |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                 |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=containerd            |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.23.6-rc.0                          |                                                 |         |         |                               |                               |
	| ssh     | -p                                                         | newest-cni-20220412201253-42006                 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:14:43 UTC | Tue, 12 Apr 2022 20:14:43 UTC |
	|         | newest-cni-20220412201253-42006                            |                                                 |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                               |                               |
	| pause   | -p                                                         | newest-cni-20220412201253-42006                 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:14:43 UTC | Tue, 12 Apr 2022 20:14:44 UTC |
	|         | newest-cni-20220412201253-42006                            |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                               |                               |
	| unpause | -p                                                         | newest-cni-20220412201253-42006                 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:14:45 UTC | Tue, 12 Apr 2022 20:14:45 UTC |
	|         | newest-cni-20220412201253-42006                            |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                               |                               |
	| delete  | -p                                                         | newest-cni-20220412201253-42006                 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:14:46 UTC | Tue, 12 Apr 2022 20:14:49 UTC |
	|         | newest-cni-20220412201253-42006                            |                                                 |         |         |                               |                               |
	| delete  | -p                                                         | newest-cni-20220412201253-42006                 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:14:49 UTC | Tue, 12 Apr 2022 20:14:49 UTC |
	|         | newest-cni-20220412201253-42006                            |                                                 |         |         |                               |                               |
	| -p      | old-k8s-version-20220412200421-42006                       | old-k8s-version-20220412200421-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:17:18 UTC | Tue, 12 Apr 2022 20:17:19 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	| -p      | old-k8s-version-20220412200421-42006                       | old-k8s-version-20220412200421-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:17:20 UTC | Tue, 12 Apr 2022 20:17:21 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | old-k8s-version-20220412200421-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:17:22 UTC | Tue, 12 Apr 2022 20:17:22 UTC |
	|         | old-k8s-version-20220412200421-42006                       |                                                 |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                               |                               |
	| -p      | default-k8s-different-port-20220412201228-42006            | default-k8s-different-port-20220412201228-42006 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:17:23 UTC | Tue, 12 Apr 2022 20:17:24 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	| stop    | -p                                                         | old-k8s-version-20220412200421-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:17:23 UTC | Tue, 12 Apr 2022 20:17:28 UTC |
	|         | old-k8s-version-20220412200421-42006                       |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | old-k8s-version-20220412200421-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:17:29 UTC | Tue, 12 Apr 2022 20:17:29 UTC |
	|         | old-k8s-version-20220412200421-42006                       |                                                 |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                               |                               |
	| -p      | embed-certs-20220412200510-42006                           | embed-certs-20220412200510-42006                | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:18:10 UTC | Tue, 12 Apr 2022 20:18:11 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	| -p      | embed-certs-20220412200510-42006                           | embed-certs-20220412200510-42006                | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:18:13 UTC | Tue, 12 Apr 2022 20:18:13 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | embed-certs-20220412200510-42006                | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:18:14 UTC | Tue, 12 Apr 2022 20:18:14 UTC |
	|         | embed-certs-20220412200510-42006                           |                                                 |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                               |                               |
	| stop    | -p                                                         | embed-certs-20220412200510-42006                | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:18:15 UTC | Tue, 12 Apr 2022 20:18:25 UTC |
	|         | embed-certs-20220412200510-42006                           |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | embed-certs-20220412200510-42006                | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:18:25 UTC | Tue, 12 Apr 2022 20:18:25 UTC |
	|         | embed-certs-20220412200510-42006                           |                                                 |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                               |                               |
	|---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/04/12 20:18:25
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.18 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
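	The four header lines above are the standard glog/klog preamble; every entry that follows uses the [IWEF] layout they describe. As an illustrative sketch only (plain Go, not minikube code), the fields of one entry can be pulled apart like this:
	
		package main
	
		import (
			"fmt"
			"regexp"
		)
	
		// klogLine matches the header described above:
		// severity, mmdd, hh:mm:ss.uuuuuu, thread id, file:line, message.
		var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^ ]+:\d+)\] (.*)$`)
	
		func main() {
			line := `I0412 20:18:25.862605  293188 out.go:297] Setting OutFile to fd 1 ...`
			if m := klogLine.FindStringSubmatch(line); m != nil {
				fmt.Printf("severity=%s date=%s time=%s tid=%s src=%s msg=%q\n",
					m[1], m[2], m[3], m[4], m[5], m[6])
			}
		}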
	I0412 20:18:25.862605  293188 out.go:297] Setting OutFile to fd 1 ...
	I0412 20:18:25.862730  293188 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0412 20:18:25.862740  293188 out.go:310] Setting ErrFile to fd 2...
	I0412 20:18:25.862745  293188 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0412 20:18:25.862852  293188 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/bin
	I0412 20:18:25.863116  293188 out.go:304] Setting JSON to false
	I0412 20:18:25.864718  293188 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":10859,"bootTime":1649783847,"procs":737,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1023-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0412 20:18:25.864796  293188 start.go:125] virtualization: kvm guest
	I0412 20:18:25.867632  293188 out.go:176] * [embed-certs-20220412200510-42006] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	I0412 20:18:25.869167  293188 out.go:176]   - MINIKUBE_LOCATION=13812
	I0412 20:18:25.867850  293188 notify.go:193] Checking for updates...
	I0412 20:18:25.870679  293188 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0412 20:18:25.872520  293188 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0412 20:18:25.874113  293188 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube
	I0412 20:18:25.875680  293188 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0412 20:18:25.876226  293188 config.go:178] Loaded profile config "embed-certs-20220412200510-42006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.5
	I0412 20:18:25.876728  293188 driver.go:346] Setting default libvirt URI to qemu:///system
	I0412 20:18:25.920777  293188 docker.go:137] docker version: linux-20.10.14
	I0412 20:18:25.920901  293188 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0412 20:18:26.018991  293188 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:75 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:44 SystemTime:2022-04-12 20:18:25.951512717 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1023-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662799872 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0412 20:18:26.019089  293188 docker.go:254] overlay module found
	I0412 20:18:26.021901  293188 out.go:176] * Using the docker driver based on existing profile
	I0412 20:18:26.021929  293188 start.go:284] selected driver: docker
	I0412 20:18:26.021936  293188 start.go:801] validating driver "docker" against &{Name:embed-certs-20220412200510-42006 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:embed-certs-20220412200510-42006 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0412 20:18:26.022056  293188 start.go:812] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	W0412 20:18:26.022097  293188 oci.go:120] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0412 20:18:26.022122  293188 out.go:241] ! Your cgroup does not allow setting memory.
	I0412 20:18:26.023822  293188 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0412 20:18:26.024448  293188 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0412 20:18:26.122834  293188 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:75 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:44 SystemTime:2022-04-12 20:18:26.056644105 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1023-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662799872 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	W0412 20:18:26.123002  293188 oci.go:120] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0412 20:18:26.123035  293188 out.go:241] ! Your cgroup does not allow setting memory.
	I0412 20:18:26.125282  293188 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0412 20:18:26.125414  293188 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0412 20:18:26.125443  293188 cni.go:93] Creating CNI manager for ""
	I0412 20:18:26.125451  293188 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
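	The empty string in `Creating CNI manager for ""` indicates no CNI was requested explicitly; because the docker driver is paired here with a non-Docker runtime (containerd), minikube cannot rely on Docker's built-in bridge networking and so, as the line above records, defaults to the kindnet CNI.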
	I0412 20:18:26.125472  293188 start_flags.go:306] config:
	{Name:embed-certs-20220412200510-42006 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:embed-certs-20220412200510-42006 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0412 20:18:26.127545  293188 out.go:176] * Starting control plane node embed-certs-20220412200510-42006 in cluster embed-certs-20220412200510-42006
	I0412 20:18:26.127593  293188 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0412 20:18:26.129188  293188 out.go:176] * Pulling base image ...
	I0412 20:18:26.129236  293188 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime containerd
	I0412 20:18:26.129274  293188 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-containerd-overlay2-amd64.tar.lz4
	I0412 20:18:26.129311  293188 cache.go:57] Caching tarball of preloaded images
	I0412 20:18:26.129330  293188 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon
	I0412 20:18:26.129609  293188 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0412 20:18:26.129636  293188 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.5 on containerd
	I0412 20:18:26.129802  293188 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/embed-certs-20220412200510-42006/config.json ...
	I0412 20:18:26.175577  293188 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon, skipping pull
	I0412 20:18:26.175639  293188 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 exists in daemon, skipping load
	I0412 20:18:26.175656  293188 cache.go:206] Successfully downloaded all kic artifacts
	I0412 20:18:26.175717  293188 start.go:352] acquiring machines lock for embed-certs-20220412200510-42006: {Name:mk64f255895db788ec660fe05e5b2f5e43e4987c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0412 20:18:26.175846  293188 start.go:356] acquired machines lock for "embed-certs-20220412200510-42006" in 99.006µs
	I0412 20:18:26.175875  293188 start.go:94] Skipping create...Using existing machine configuration
	I0412 20:18:26.175886  293188 fix.go:55] fixHost starting: 
	I0412 20:18:26.176250  293188 cli_runner.go:164] Run: docker container inspect embed-certs-20220412200510-42006 --format={{.State.Status}}
	I0412 20:18:26.210832  293188 fix.go:103] recreateIfNeeded on embed-certs-20220412200510-42006: state=Stopped err=<nil>
	W0412 20:18:26.210874  293188 fix.go:129] unexpected machine state, will restart: <nil>
	I0412 20:18:26.213643  293188 out.go:176] * Restarting existing docker container for "embed-certs-20220412200510-42006" ...
	I0412 20:18:26.213726  293188 cli_runner.go:164] Run: docker start embed-certs-20220412200510-42006
	I0412 20:18:26.621467  293188 cli_runner.go:164] Run: docker container inspect embed-certs-20220412200510-42006 --format={{.State.Status}}
	I0412 20:18:26.658142  293188 kic.go:416] container "embed-certs-20220412200510-42006" state is running.
	I0412 20:18:26.658585  293188 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220412200510-42006
	I0412 20:18:26.695091  293188 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/embed-certs-20220412200510-42006/config.json ...
	I0412 20:18:26.695340  293188 machine.go:88] provisioning docker machine ...
	I0412 20:18:26.695369  293188 ubuntu.go:169] provisioning hostname "embed-certs-20220412200510-42006"
	I0412 20:18:26.695431  293188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220412200510-42006
	I0412 20:18:26.732045  293188 main.go:134] libmachine: Using SSH client type: native
	I0412 20:18:26.732417  293188 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7d71c0] 0x7da220 <nil>  [] 0s} 127.0.0.1 49432 <nil> <nil>}
	I0412 20:18:26.732462  293188 main.go:134] libmachine: About to run SSH command:
	sudo hostname embed-certs-20220412200510-42006 && echo "embed-certs-20220412200510-42006" | sudo tee /etc/hostname
	I0412 20:18:26.733264  293188 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34530->127.0.0.1:49432: read: connection reset by peer
	I0412 20:18:29.866005  293188 main.go:134] libmachine: SSH cmd err, output: <nil>: embed-certs-20220412200510-42006
	
	I0412 20:18:29.866093  293188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220412200510-42006
	I0412 20:18:29.900758  293188 main.go:134] libmachine: Using SSH client type: native
	I0412 20:18:29.900906  293188 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7d71c0] 0x7da220 <nil>  [] 0s} 127.0.0.1 49432 <nil> <nil>}
	I0412 20:18:29.900927  293188 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-20220412200510-42006' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-20220412200510-42006/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-20220412200510-42006' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0412 20:18:30.024252  293188 main.go:134] libmachine: SSH cmd err, output: <nil>: 
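	Both SSH commands succeeded (hence the empty output above): the first set the container's hostname, and the second idempotently rewrote the 127.0.1.1 entry in /etc/hosts (or appended one if absent) so the node can resolve its own name.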
	I0412 20:18:30.024282  293188 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube}
	I0412 20:18:30.024338  293188 ubuntu.go:177] setting up certificates
	I0412 20:18:30.024354  293188 provision.go:83] configureAuth start
	I0412 20:18:30.024412  293188 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220412200510-42006
	I0412 20:18:30.058758  293188 provision.go:138] copyHostCerts
	I0412 20:18:30.058845  293188 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem, removing ...
	I0412 20:18:30.058861  293188 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem
	I0412 20:18:30.058929  293188 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem (1082 bytes)
	I0412 20:18:30.059051  293188 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem, removing ...
	I0412 20:18:30.059069  293188 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem
	I0412 20:18:30.059099  293188 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem (1123 bytes)
	I0412 20:18:30.059165  293188 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem, removing ...
	I0412 20:18:30.059178  293188 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem
	I0412 20:18:30.059201  293188 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem (1675 bytes)
	I0412 20:18:30.059267  293188 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem org=jenkins.embed-certs-20220412200510-42006 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-20220412200510-42006]
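	The regenerated server certificate carries SANs for the container's network IP (192.168.58.2), loopback, and the minikube/profile hostnames, since the machine is reached both at its in-network address and through ports published on 127.0.0.1 (as with the SSH connections to 127.0.0.1:49432 below).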
	I0412 20:18:30.297705  293188 provision.go:172] copyRemoteCerts
	I0412 20:18:30.297778  293188 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0412 20:18:30.297829  293188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220412200510-42006
	I0412 20:18:30.332442  293188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/embed-certs-20220412200510-42006/id_rsa Username:docker}
	I0412 20:18:30.420873  293188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0412 20:18:30.439067  293188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0412 20:18:30.457093  293188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem --> /etc/docker/server.pem (1269 bytes)
	I0412 20:18:30.475014  293188 provision.go:86] duration metric: configureAuth took 450.644265ms
	I0412 20:18:30.475046  293188 ubuntu.go:193] setting minikube options for container-runtime
	I0412 20:18:30.475255  293188 config.go:178] Loaded profile config "embed-certs-20220412200510-42006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.5
	I0412 20:18:30.475269  293188 machine.go:91] provisioned docker machine in 3.779914385s
	I0412 20:18:30.475278  293188 start.go:306] post-start starting for "embed-certs-20220412200510-42006" (driver="docker")
	I0412 20:18:30.475291  293188 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0412 20:18:30.475347  293188 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0412 20:18:30.475392  293188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220412200510-42006
	I0412 20:18:30.510455  293188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/embed-certs-20220412200510-42006/id_rsa Username:docker}
	I0412 20:18:30.600261  293188 ssh_runner.go:195] Run: cat /etc/os-release
	I0412 20:18:30.603987  293188 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0412 20:18:30.604028  293188 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0412 20:18:30.604042  293188 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0412 20:18:30.604051  293188 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0412 20:18:30.604086  293188 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/addons for local assets ...
	I0412 20:18:30.604150  293188 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files for local assets ...
	I0412 20:18:30.604213  293188 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/420062.pem -> 420062.pem in /etc/ssl/certs
	I0412 20:18:30.604287  293188 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0412 20:18:30.611676  293188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/420062.pem --> /etc/ssl/certs/420062.pem (1708 bytes)
	I0412 20:18:30.630124  293188 start.go:309] post-start completed in 154.824821ms
	I0412 20:18:30.630194  293188 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0412 20:18:30.630238  293188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220412200510-42006
	I0412 20:18:30.664427  293188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/embed-certs-20220412200510-42006/id_rsa Username:docker}
	I0412 20:18:30.748775  293188 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0412 20:18:30.752838  293188 fix.go:57] fixHost completed within 4.576944958s
	I0412 20:18:30.752868  293188 start.go:81] releasing machines lock for "embed-certs-20220412200510-42006", held for 4.577006104s
	I0412 20:18:30.752946  293188 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220412200510-42006
	I0412 20:18:30.786779  293188 ssh_runner.go:195] Run: systemctl --version
	I0412 20:18:30.786833  293188 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0412 20:18:30.786839  293188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220412200510-42006
	I0412 20:18:30.786895  293188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220412200510-42006
	I0412 20:18:30.823951  293188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/embed-certs-20220412200510-42006/id_rsa Username:docker}
	I0412 20:18:30.826217  293188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/embed-certs-20220412200510-42006/id_rsa Username:docker}
	I0412 20:18:30.926862  293188 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0412 20:18:30.939004  293188 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0412 20:18:30.949472  293188 docker.go:183] disabling docker service ...
	I0412 20:18:30.949536  293188 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0412 20:18:30.959877  293188 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0412 20:18:30.969654  293188 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0412 20:18:31.049568  293188 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0412 20:18:31.130181  293188 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0412 20:18:31.139692  293188 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0412 20:18:31.153074  293188 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLm1vbml0b3IudjEuY2dyb3VwcyJdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNiIKICAgIHN0YXRzX2NvbGxlY3RfcGVyaW9kID0gMTAKICAgIGVuYWJsZV90bHNfc3RyZWFtaW5nID0gZmFsc2UKICAgIG1heF9jb250YWluZXJfbG9nX2xpbmVfc2l6ZSA9IDE2Mzg0CiAgICByZXN0cmljdF9vb21fc2NvcmVfYWRqID0gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZF0KICAgICAgZGlzY2FyZF91bnBhY2tlZF9sYXllcnMgPSB0cnVlCiAgICAgIHNuYXBzaG90dGVyID0gIm92ZXJsYXlmcyIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQuZGVmYXVsdF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnVudHJ1c3RlZF93b3JrbG9hZF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICIiCiAgICAgICAgcnVudGltZV9lbmdpbmUgPSAiIgogICAgICAgIHJ1bnRpbWVfcm9vdCA9ICIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzLnJ1bmNdCiAgICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICBTeXN0ZW1kQ2dyb3VwID0gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY25pXQogICAgICBiaW5fZGlyID0gIi9vcHQvY25pL2JpbiIKICAgICAgY29uZl9kaXIgPSAiL2V0Yy9jbmkvbmV0Lm1rIgogICAgICBjb25mX3RlbXBsYXRlID0gIiIKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5yZWdpc3RyeV0KICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5Lm1pcnJvcnNdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5Lm1pcnJvcnMuImRvY2tlci5pbyJdCiAgICAgICAgICBlbmRwb2ludCA9IFsiaHR0cHM6Ly9yZWdpc3RyeS0xLmRvY2tlci5pbyJdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuc2VydmljZS52MS5kaWZmLXNlcnZpY2UiXQogICAgZGVmYXVsdCA9IFsid2Fsa2luZyJdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ2MudjEuc2NoZWR1bGVyIl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
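	The long printf argument above is a base64-encoded containerd config.toml, decoded on the node via `base64 -d`. The payload begins:
	
		version = 2
		root = "/var/lib/containerd"
		state = "/run/containerd"
		oom_score = 0
		[grpc]
		  address = "/run/containerd/containerd.sock"
	
	and further down it sets SystemdCgroup = false (matching the cgroupfs driver reported by docker info) and points the CRI plugin's CNI conf_dir at "/etc/cni/net.mk", consistent with the kubelet.cni-conf-dir extra option used throughout this start.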
	I0412 20:18:31.166937  293188 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0412 20:18:31.173897  293188 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0412 20:18:31.180575  293188 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0412 20:18:31.251378  293188 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0412 20:18:31.325131  293188 start.go:441] Will wait 60s for socket path /run/containerd/containerd.sock
	I0412 20:18:31.325208  293188 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0412 20:18:31.329163  293188 start.go:462] Will wait 60s for crictl version
	I0412 20:18:31.329215  293188 ssh_runner.go:195] Run: sudo crictl version
	I0412 20:18:31.354553  293188 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-04-12T20:18:31Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0412 20:18:42.402319  293188 ssh_runner.go:195] Run: sudo crictl version
	I0412 20:18:42.427518  293188 start.go:471] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.5.10
	RuntimeApiVersion:  v1alpha2
	I0412 20:18:42.427582  293188 ssh_runner.go:195] Run: containerd --version
	I0412 20:18:42.448343  293188 ssh_runner.go:195] Run: containerd --version
	I0412 20:18:42.472811  293188 out.go:176] * Preparing Kubernetes v1.23.5 on containerd 1.5.10 ...
	I0412 20:18:42.472913  293188 cli_runner.go:164] Run: docker network inspect embed-certs-20220412200510-42006 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0412 20:18:42.506510  293188 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0412 20:18:42.510028  293188 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0412 20:18:39.992607  289404 retry.go:31] will retry after 15.44552029s: kubelet not initialised
	I0412 20:18:42.522298  293188 out.go:176]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0412 20:18:42.522410  293188 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime containerd
	I0412 20:18:42.522486  293188 ssh_runner.go:195] Run: sudo crictl images --output json
	I0412 20:18:42.548260  293188 containerd.go:607] all images are preloaded for containerd runtime.
	I0412 20:18:42.548288  293188 containerd.go:521] Images already preloaded, skipping extraction
	I0412 20:18:42.548350  293188 ssh_runner.go:195] Run: sudo crictl images --output json
	I0412 20:18:42.573330  293188 containerd.go:607] all images are preloaded for containerd runtime.
	I0412 20:18:42.573355  293188 cache_images.go:84] Images are preloaded, skipping loading
	I0412 20:18:42.573400  293188 ssh_runner.go:195] Run: sudo crictl info
	I0412 20:18:42.597742  293188 cni.go:93] Creating CNI manager for ""
	I0412 20:18:42.597769  293188 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0412 20:18:42.597782  293188 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0412 20:18:42.597800  293188 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.23.5 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-20220412200510-42006 NodeName:embed-certs-20220412200510-42006 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0412 20:18:42.597944  293188 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "embed-certs-20220412200510-42006"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.5
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0412 20:18:42.598030  293188 kubeadm.go:936] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.5/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=embed-certs-20220412200510-42006 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.58.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.5 ClusterName:embed-certs-20220412200510-42006 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
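	In the kubelet unit drop-in above, the empty ExecStart= line is deliberate: a systemd drop-in must first clear the base unit's ExecStart before it can substitute the full kubelet command line that follows it.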
	I0412 20:18:42.598081  293188 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.5
	I0412 20:18:42.605494  293188 binaries.go:44] Found k8s binaries, skipping transfer
	I0412 20:18:42.605604  293188 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0412 20:18:42.612680  293188 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (577 bytes)
	I0412 20:18:42.626260  293188 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0412 20:18:42.639600  293188 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2061 bytes)
	I0412 20:18:42.653027  293188 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0412 20:18:42.656044  293188 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0412 20:18:42.665264  293188 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/embed-certs-20220412200510-42006 for IP: 192.168.58.2
	I0412 20:18:42.665394  293188 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.key
	I0412 20:18:42.665433  293188 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.key
	I0412 20:18:42.665515  293188 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/embed-certs-20220412200510-42006/client.key
	I0412 20:18:42.665564  293188 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/embed-certs-20220412200510-42006/apiserver.key.cee25041
	I0412 20:18:42.665596  293188 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/embed-certs-20220412200510-42006/proxy-client.key
	I0412 20:18:42.665720  293188 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/42006.pem (1338 bytes)
	W0412 20:18:42.665758  293188 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/42006_empty.pem, impossibly tiny 0 bytes
	I0412 20:18:42.665772  293188 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem (1679 bytes)
	I0412 20:18:42.665799  293188 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem (1082 bytes)
	I0412 20:18:42.665824  293188 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem (1123 bytes)
	I0412 20:18:42.665847  293188 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem (1675 bytes)
	I0412 20:18:42.665883  293188 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/420062.pem (1708 bytes)
	I0412 20:18:42.666420  293188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/embed-certs-20220412200510-42006/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0412 20:18:42.684961  293188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/embed-certs-20220412200510-42006/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0412 20:18:42.703505  293188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/embed-certs-20220412200510-42006/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0412 20:18:42.722170  293188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/embed-certs-20220412200510-42006/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0412 20:18:42.740728  293188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0412 20:18:42.759411  293188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0412 20:18:42.777909  293188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0412 20:18:42.795814  293188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0412 20:18:42.813492  293188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0412 20:18:42.831827  293188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/42006.pem --> /usr/share/ca-certificates/42006.pem (1338 bytes)
	I0412 20:18:42.850182  293188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/420062.pem --> /usr/share/ca-certificates/420062.pem (1708 bytes)
	I0412 20:18:42.867975  293188 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0412 20:18:42.882318  293188 ssh_runner.go:195] Run: openssl version
	I0412 20:18:42.887540  293188 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/42006.pem && ln -fs /usr/share/ca-certificates/42006.pem /etc/ssl/certs/42006.pem"
	I0412 20:18:42.895898  293188 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42006.pem
	I0412 20:18:42.899141  293188 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Apr 12 19:26 /usr/share/ca-certificates/42006.pem
	I0412 20:18:42.899202  293188 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42006.pem
	I0412 20:18:42.904418  293188 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/42006.pem /etc/ssl/certs/51391683.0"
	I0412 20:18:42.911721  293188 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/420062.pem && ln -fs /usr/share/ca-certificates/420062.pem /etc/ssl/certs/420062.pem"
	I0412 20:18:42.919627  293188 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/420062.pem
	I0412 20:18:42.922828  293188 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Apr 12 19:26 /usr/share/ca-certificates/420062.pem
	I0412 20:18:42.922889  293188 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/420062.pem
	I0412 20:18:42.928163  293188 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/420062.pem /etc/ssl/certs/3ec20f2e.0"
	I0412 20:18:42.935357  293188 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0412 20:18:42.942820  293188 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0412 20:18:42.945929  293188 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Apr 12 19:21 /usr/share/ca-certificates/minikubeCA.pem
	I0412 20:18:42.945976  293188 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0412 20:18:42.950738  293188 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
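The three symlink commands above implement the standard OpenSSL hashed-directory layout: tools that scan /etc/ssl/certs locate a CA through a link named <subject-hash>.0 pointing at the PEM file, which is why each ln -fs is preceded by an openssl x509 -hash call. A minimal shell sketch of the same idiom, using the minikubeCA path from the log (the hash value is whatever openssl prints; b5213941 in this run):

    # Compute the subject-name hash OpenSSL uses to look up CAs in a cert directory.
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    # Create the <hash>.0 lookup link unless one is already in place.
    sudo test -L "/etc/ssl/certs/${HASH}.0" || \
        sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"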
	I0412 20:18:42.957667  293188 kubeadm.go:391] StartCluster: {Name:embed-certs-20220412200510-42006 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:embed-certs-20220412200510-42006 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0412 20:18:42.957775  293188 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0412 20:18:42.957819  293188 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0412 20:18:42.983592  293188 cri.go:87] found id: "45fabe7cb7395e0c30a4393ad9200abaf7881d0466d5ffdcde46faf8e637daae"
	I0412 20:18:42.983618  293188 cri.go:87] found id: "99c30d34ba6769dbe90b18eefcf0db92072e5d977b32371ee959bba91b958dc9"
	I0412 20:18:42.983624  293188 cri.go:87] found id: "1549b6cbd198c45abd7224f0fbd5ce0d6713b1d4c5ccbad32a34ac2b6a109d2d"
	I0412 20:18:42.983631  293188 cri.go:87] found id: "3ecbbe2de190c9c1e2f575bb88b355a7eaf09932cb16fd1a6cef069051de9930"
	I0412 20:18:42.983636  293188 cri.go:87] found id: "3bb4ed6826e041fff709fbb31d1f2446a15f08bcc0fa07eb151243acd0226bed"
	I0412 20:18:42.983642  293188 cri.go:87] found id: "e67989f440e4332c6ff00c54e8fa657032c034f05a0edc75576cb16ffd4794b0"
	I0412 20:18:42.983648  293188 cri.go:87] found id: ""
	I0412 20:18:42.983682  293188 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0412 20:18:42.997448  293188 cri.go:114] JSON = null
	W0412 20:18:42.997504  293188 kubeadm.go:398] unpause failed: list paused: list returned 0 containers, but ps returned 6
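The warning here comes from cross-checking two views of the same runtime: crictl finds six kube-system containers by CRI label, while runc, asked for the low-level state of containers under containerd's k8s.io root, returns an empty JSON list, so none of the six can be confirmed paused. The two probes, as they could be run by hand inside the node (same arguments as in the log):

    # CRI view: IDs of all kube-system containers, running or not.
    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
    # runc view: raw container state (including paused) under containerd's runc root.
    sudo runc --root /run/containerd/runc/k8s.io list -f json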
	I0412 20:18:42.997555  293188 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0412 20:18:43.004738  293188 kubeadm.go:402] found existing configuration files, will attempt cluster restart
	I0412 20:18:43.004762  293188 kubeadm.go:601] restartCluster start
	I0412 20:18:43.004809  293188 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0412 20:18:43.012338  293188 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:18:43.013058  293188 kubeconfig.go:116] verify returned: extract IP: "embed-certs-20220412200510-42006" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0412 20:18:43.013376  293188 kubeconfig.go:127] "embed-certs-20220412200510-42006" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig - will repair!
	I0412 20:18:43.013929  293188 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig: {Name:mk47182dadcc139652898f38b199a7292a3b4031 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 20:18:43.015377  293188 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0412 20:18:43.022831  293188 api_server.go:165] Checking apiserver status ...
	I0412 20:18:43.022901  293188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:18:43.032323  293188 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:18:43.232731  293188 api_server.go:165] Checking apiserver status ...
	I0412 20:18:43.232839  293188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:18:43.241744  293188 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:18:43.433096  293188 api_server.go:165] Checking apiserver status ...
	I0412 20:18:43.433175  293188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:18:43.442230  293188 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:18:43.632561  293188 api_server.go:165] Checking apiserver status ...
	I0412 20:18:43.632636  293188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:18:43.641527  293188 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:18:43.832747  293188 api_server.go:165] Checking apiserver status ...
	I0412 20:18:43.832833  293188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:18:43.841699  293188 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:18:44.032995  293188 api_server.go:165] Checking apiserver status ...
	I0412 20:18:44.033117  293188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:18:44.042221  293188 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:18:44.232605  293188 api_server.go:165] Checking apiserver status ...
	I0412 20:18:44.232679  293188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:18:44.241596  293188 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:18:44.432814  293188 api_server.go:165] Checking apiserver status ...
	I0412 20:18:44.432898  293188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:18:44.441681  293188 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:18:44.633020  293188 api_server.go:165] Checking apiserver status ...
	I0412 20:18:44.633115  293188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:18:44.642100  293188 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:18:44.833416  293188 api_server.go:165] Checking apiserver status ...
	I0412 20:18:44.833505  293188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:18:44.843045  293188 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:18:45.033244  293188 api_server.go:165] Checking apiserver status ...
	I0412 20:18:45.033372  293188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:18:45.042455  293188 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:18:45.232743  293188 api_server.go:165] Checking apiserver status ...
	I0412 20:18:45.232829  293188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:18:45.241922  293188 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:18:45.433151  293188 api_server.go:165] Checking apiserver status ...
	I0412 20:18:45.433234  293188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:18:45.442285  293188 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:18:45.632437  293188 api_server.go:165] Checking apiserver status ...
	I0412 20:18:45.632580  293188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:18:45.641663  293188 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:18:45.833174  293188 api_server.go:165] Checking apiserver status ...
	I0412 20:18:45.833254  293188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:18:45.842437  293188 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:18:46.032944  293188 api_server.go:165] Checking apiserver status ...
	I0412 20:18:46.033024  293188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:18:46.042136  293188 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:18:46.042169  293188 api_server.go:165] Checking apiserver status ...
	I0412 20:18:46.042209  293188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:18:46.050391  293188 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:18:46.050420  293188 kubeadm.go:576] needs reconfigure: apiserver error: timed out waiting for the condition
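Every "Checking apiserver status ..." attempt above is the same probe on a roughly 200ms interval: pgrep against the full command line of a kube-apiserver process. Exit status 1 just means no matching process yet; after about three seconds of misses the code gives up and schedules a reconfigure. A sketch of the equivalent loop (the sleep interval is inferred from the timestamps, not taken from any flag):

    # Poll until a kube-apiserver process mentioning "minikube" shows up.
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
        sleep 0.2    # the log shows ~200ms between attempts
    done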
	I0412 20:18:46.050427  293188 kubeadm.go:1067] stopping kube-system containers ...
	I0412 20:18:46.050443  293188 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0412 20:18:46.050494  293188 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0412 20:18:46.077200  293188 cri.go:87] found id: "45fabe7cb7395e0c30a4393ad9200abaf7881d0466d5ffdcde46faf8e637daae"
	I0412 20:18:46.077226  293188 cri.go:87] found id: "99c30d34ba6769dbe90b18eefcf0db92072e5d977b32371ee959bba91b958dc9"
	I0412 20:18:46.077240  293188 cri.go:87] found id: "1549b6cbd198c45abd7224f0fbd5ce0d6713b1d4c5ccbad32a34ac2b6a109d2d"
	I0412 20:18:46.077247  293188 cri.go:87] found id: "3ecbbe2de190c9c1e2f575bb88b355a7eaf09932cb16fd1a6cef069051de9930"
	I0412 20:18:46.077255  293188 cri.go:87] found id: "3bb4ed6826e041fff709fbb31d1f2446a15f08bcc0fa07eb151243acd0226bed"
	I0412 20:18:46.077286  293188 cri.go:87] found id: "e67989f440e4332c6ff00c54e8fa657032c034f05a0edc75576cb16ffd4794b0"
	I0412 20:18:46.077300  293188 cri.go:87] found id: ""
	I0412 20:18:46.077307  293188 cri.go:232] Stopping containers: [45fabe7cb7395e0c30a4393ad9200abaf7881d0466d5ffdcde46faf8e637daae 99c30d34ba6769dbe90b18eefcf0db92072e5d977b32371ee959bba91b958dc9 1549b6cbd198c45abd7224f0fbd5ce0d6713b1d4c5ccbad32a34ac2b6a109d2d 3ecbbe2de190c9c1e2f575bb88b355a7eaf09932cb16fd1a6cef069051de9930 3bb4ed6826e041fff709fbb31d1f2446a15f08bcc0fa07eb151243acd0226bed e67989f440e4332c6ff00c54e8fa657032c034f05a0edc75576cb16ffd4794b0]
	I0412 20:18:46.077363  293188 ssh_runner.go:195] Run: which crictl
	I0412 20:18:46.080533  293188 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop 45fabe7cb7395e0c30a4393ad9200abaf7881d0466d5ffdcde46faf8e637daae 99c30d34ba6769dbe90b18eefcf0db92072e5d977b32371ee959bba91b958dc9 1549b6cbd198c45abd7224f0fbd5ce0d6713b1d4c5ccbad32a34ac2b6a109d2d 3ecbbe2de190c9c1e2f575bb88b355a7eaf09932cb16fd1a6cef069051de9930 3bb4ed6826e041fff709fbb31d1f2446a15f08bcc0fa07eb151243acd0226bed e67989f440e4332c6ff00c54e8fa657032c034f05a0edc75576cb16ffd4794b0
	I0412 20:18:46.108221  293188 ssh_runner.go:195] Run: sudo systemctl stop kubelet
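With the restart decision made, the six containers found earlier are stopped through crictl and the kubelet is shut down so it cannot immediately respawn the static pods. Reduced to its shape, with the ID list abbreviated (the full IDs appear in the log line above):

    # Stop the previously discovered kube-system containers in one call.
    sudo crictl stop 45fabe7cb739... 99c30d34ba67...   # ...and the remaining four IDs
    # Keep the kubelet from restarting them while the config is rewritten.
    sudo systemctl stop kubelet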
	I0412 20:18:46.118944  293188 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0412 20:18:46.126295  293188 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Apr 12 20:05 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Apr 12 20:05 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2067 Apr 12 20:05 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Apr 12 20:05 /etc/kubernetes/scheduler.conf
	
	I0412 20:18:46.126355  293188 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0412 20:18:46.133414  293188 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0412 20:18:46.140348  293188 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0412 20:18:46.147289  293188 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:18:46.147353  293188 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0412 20:18:46.153983  293188 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0412 20:18:46.160779  293188 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:18:46.160847  293188 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
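The grep probes above decide which kubeconfig-style files under /etc/kubernetes still point at the stable endpoint https://control-plane.minikube.internal:8443; any file where the grep exits non-zero (here controller-manager.conf and scheduler.conf) is removed so the kubeconfig init phase can regenerate it. For a single file the check reduces to:

    CONF=/etc/kubernetes/scheduler.conf
    # Keep the file only if it already targets the stable control-plane endpoint.
    sudo grep -q 'https://control-plane.minikube.internal:8443' "$CONF" || sudo rm -f "$CONF"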
	I0412 20:18:46.167729  293188 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0412 20:18:46.174673  293188 kubeadm.go:678] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0412 20:18:46.174697  293188 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0412 20:18:46.219984  293188 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0412 20:18:46.780655  293188 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0412 20:18:46.916175  293188 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0412 20:18:46.967869  293188 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
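Instead of a full kubeadm init, the restart path replays individual init phases against the copied kubeadm.yaml, in dependency order: certs, kubeconfig, kubelet-start, control-plane, etcd. Stripped of the PATH wrapper used in the log, the sequence is:

    # Replay only the phases needed to bring an existing control plane back up.
    # $phase is deliberately unquoted so "certs all" splits into two arguments.
    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
        sudo kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
    done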
	I0412 20:18:47.020948  293188 api_server.go:51] waiting for apiserver process to appear ...
	I0412 20:18:47.021032  293188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:18:47.530989  293188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:18:48.030856  293188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:18:48.530765  293188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:18:49.030619  293188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:18:49.530473  293188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:18:50.030687  293188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:18:50.530420  293188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:18:51.031271  293188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:18:51.530751  293188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:18:52.030588  293188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:18:52.530431  293188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:18:53.031324  293188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:18:53.091818  293188 api_server.go:71] duration metric: took 6.07087219s to wait for apiserver process to appear ...
	I0412 20:18:53.091857  293188 api_server.go:87] waiting for apiserver healthz status ...
	I0412 20:18:53.091871  293188 api_server.go:240] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0412 20:18:53.092280  293188 api_server.go:256] stopped: https://192.168.58.2:8443/healthz: Get "https://192.168.58.2:8443/healthz": dial tcp 192.168.58.2:8443: connect: connection refused
	I0412 20:18:53.593049  293188 api_server.go:240] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0412 20:18:55.985909  293188 api_server.go:266] https://192.168.58.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0412 20:18:55.985946  293188 api_server.go:102] status: https://192.168.58.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0412 20:18:56.093093  293188 api_server.go:240] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0412 20:18:56.106818  293188 api_server.go:266] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0412 20:18:56.106855  293188 api_server.go:102] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0412 20:18:56.593283  293188 api_server.go:240] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0412 20:18:56.598524  293188 api_server.go:266] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0412 20:18:56.598552  293188 api_server.go:102] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0412 20:18:57.093125  293188 api_server.go:240] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0412 20:18:57.098065  293188 api_server.go:266] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0412 20:18:57.098143  293188 api_server.go:102] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0412 20:18:57.593444  293188 api_server.go:240] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0412 20:18:57.598330  293188 api_server.go:266] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0412 20:18:57.604742  293188 api_server.go:140] control plane version: v1.23.5
	I0412 20:18:57.604771  293188 api_server.go:130] duration metric: took 4.512906341s to wait for apiserver health ...
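The healthz wait above passes through three distinct responses before succeeding: 403 while RBAC is not yet bootstrapped (the anonymous probe may not read /healthz), 500 while individual poststarthook checks are still settling, and finally 200. The same probe can be issued by hand; -k is used here only because this quick manual check skips server certificate verification:

    # Expect 403 (RBAC not bootstrapped), then 500 (poststarthooks failing), then 200 "ok".
    curl -ks https://192.168.58.2:8443/healthz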
	I0412 20:18:57.604785  293188 cni.go:93] Creating CNI manager for ""
	I0412 20:18:57.604793  293188 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0412 20:18:55.442437  289404 kubeadm.go:752] kubelet initialised
	I0412 20:18:55.442463  289404 kubeadm.go:753] duration metric: took 58.431626455s waiting for restarted kubelet to initialise ...
	I0412 20:18:55.442472  289404 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0412 20:18:55.446881  289404 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace to be "Ready" ...
	I0412 20:18:57.452309  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:18:57.607772  293188 out.go:176] * Configuring CNI (Container Networking Interface) ...
	I0412 20:18:57.607862  293188 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0412 20:18:57.612047  293188 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.5/kubectl ...
	I0412 20:18:57.612106  293188 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0412 20:18:57.625606  293188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
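The CNI step writes the generated kindnet manifest into the node as /var/tmp/minikube/cni.yaml and applies it with the kubectl binary cached for the target Kubernetes version, authenticated through the in-VM kubeconfig; run by hand it is exactly the command above:

    # Apply the generated CNI manifest from inside the node.
    sudo /var/lib/minikube/binaries/v1.23.5/kubectl apply \
        --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml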
	I0412 20:18:58.259688  293188 system_pods.go:43] waiting for kube-system pods to appear ...
	I0412 20:18:58.267983  293188 system_pods.go:59] 9 kube-system pods found
	I0412 20:18:58.268016  293188 system_pods.go:61] "coredns-64897985d-zvglg" [d5fab6b5-c460-460f-8cb9-6a8df3a0a493] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0412 20:18:58.268026  293188 system_pods.go:61] "etcd-embed-certs-20220412200510-42006" [f0b1b85a-9a7c-49a3-9c3a-f120f8274f99] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0412 20:18:58.268033  293188 system_pods.go:61] "kindnet-7f7sj" [059bb69b-b8de-4f71-85b1-8d7391491598] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0412 20:18:58.268040  293188 system_pods.go:61] "kube-apiserver-embed-certs-20220412200510-42006" [6cfeb71b-0d01-4c67-8a26-edbc213c684f] Running
	I0412 20:18:58.268048  293188 system_pods.go:61] "kube-controller-manager-embed-certs-20220412200510-42006" [726d3fb3-6d83-4325-9328-a407b3bffd34] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0412 20:18:58.268055  293188 system_pods.go:61] "kube-proxy-6nznr" [aa45eb74-fde3-453a-82ad-e29ae4116d51] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0412 20:18:58.268060  293188 system_pods.go:61] "kube-scheduler-embed-certs-20220412200510-42006" [c03b607f-b4f9-4ff6-8d07-8890c53a7dd6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0412 20:18:58.268085  293188 system_pods.go:61] "metrics-server-b955d9d8-6cvmp" [cfc4546c-e7eb-4626-af34-9d7382032070] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0412 20:18:58.268094  293188 system_pods.go:61] "storage-provisioner" [c17111bc-be71-4c72-9d44-0de354dc03e1] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0412 20:18:58.268110  293188 system_pods.go:74] duration metric: took 8.401782ms to wait for pod list to return data ...
	I0412 20:18:58.268120  293188 node_conditions.go:102] verifying NodePressure condition ...
	I0412 20:18:58.270949  293188 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0412 20:18:58.270997  293188 node_conditions.go:123] node cpu capacity is 8
	I0412 20:18:58.271013  293188 node_conditions.go:105] duration metric: took 2.882717ms to run NodePressure ...
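The NodePressure check reads the node's reported capacity (the ephemeral storage and CPU figures above) from the Kubernetes API rather than from the host. One way to view the same fields by hand, assuming a working kubeconfig for the cluster:

    # Print the capacity block the NodePressure verification reads.
    kubectl get nodes -o jsonpath='{.items[0].status.capacity}'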
	I0412 20:18:58.271045  293188 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0412 20:18:58.422028  293188 kubeadm.go:737] waiting for restarted kubelet to initialise ...
	I0412 20:18:58.426575  293188 kubeadm.go:752] kubelet initialised
	I0412 20:18:58.426601  293188 kubeadm.go:753] duration metric: took 4.547593ms waiting for restarted kubelet to initialise ...
	I0412 20:18:58.426610  293188 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0412 20:18:58.432786  293188 pod_ready.go:78] waiting up to 4m0s for pod "coredns-64897985d-zvglg" in "kube-system" namespace to be "Ready" ...
	I0412 20:19:00.439498  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:18:59.452702  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:01.951942  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:03.952202  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:02.939601  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:05.439254  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:05.952347  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:07.952479  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:07.439551  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:09.939856  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:10.452258  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:12.453023  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:12.439364  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:14.939042  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:14.453080  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:16.952944  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:16.939458  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:19.439708  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:19.452528  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:21.952621  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:23.952660  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:21.938672  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:23.939041  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:25.953037  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:28.452797  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:26.439455  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:28.939098  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:30.952242  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:32.952805  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:30.939386  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:33.439558  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:35.452316  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:37.951759  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:35.939628  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:38.439636  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:39.952865  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:41.952966  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:40.939568  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:43.439290  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:44.451931  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:46.452616  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:48.952981  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:45.938661  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:47.939519  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:50.439960  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:51.452872  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:53.952148  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:52.939629  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:54.941643  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:56.452819  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:58.952504  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:57.438786  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:59.439809  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:01.452181  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:03.952960  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:01.939098  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:03.939221  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:05.953040  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:08.452051  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:05.939416  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:07.939575  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:10.438960  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:10.452446  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:12.452585  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:12.439256  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:14.439328  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:14.952918  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:17.453178  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:16.939000  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:19.438936  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:19.953047  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:22.452913  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:21.439374  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:23.439718  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:25.440229  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:24.952197  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:26.952775  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:27.938777  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:29.939549  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:29.452518  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:31.452773  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:33.951896  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:32.439297  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:34.939290  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:35.952124  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:37.952888  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:36.939443  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:39.439507  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:40.452829  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:42.952723  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:41.939547  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:44.439685  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:45.452682  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:47.952663  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:46.439959  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:48.939551  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:49.952833  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:51.953215  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:51.439298  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:53.939194  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:54.452966  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:56.952662  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:56.439050  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:58.439250  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:59.452894  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:01.452993  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:03.952039  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:00.939359  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:03.439609  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:06.452224  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:08.951951  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:05.938661  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:07.939824  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:10.439218  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:10.952389  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:12.952480  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:12.939504  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:15.439451  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:15.452284  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:17.953019  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:17.939505  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:20.439836  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:20.451991  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:22.452912  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:22.938740  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:24.939630  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:24.952892  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:27.453024  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:27.439712  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:29.939146  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:29.953115  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:32.452095  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:32.439187  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:34.439528  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:34.453190  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:36.952740  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:36.939450  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:39.438925  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:39.453093  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:41.952831  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:41.439158  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:43.440112  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:44.452526  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:46.453025  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:48.952697  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:45.939050  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:47.939118  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:49.939338  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:51.452345  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:53.452917  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:52.439020  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:54.439255  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:55.952397  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:57.952650  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:56.939471  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:59.438970  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:22:00.451875  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:22:02.452533  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:22:01.439410  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:22:03.439747  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:22:04.952323  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:22:06.953080  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:22:05.939704  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:22:08.439258  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:22:09.452783  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:22:11.452916  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:22:13.952781  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:22:10.939241  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:22:13.439644  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:22:16.452431  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:22:18.952125  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:22:15.939011  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:22:18.439077  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:22:20.439255  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:22:20.953057  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:22:23.452290  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:22:22.439645  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:22:24.938780  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:22:25.953032  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:22:28.452613  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:22:26.939148  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:22:29.439156  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:22:30.952045  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:22:33.453012  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:22:31.439554  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:22:33.939040  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:22:35.952844  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:22:38.452043  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:22:36.439185  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:22:38.939474  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:22:40.452897  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:22:42.952703  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:22:41.439595  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:22:43.439860  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:22:44.952775  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:22:47.452279  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:22:45.938954  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:22:48.439103  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:22:49.452612  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:22:51.452653  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:22:53.952226  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:22:50.939266  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:22:52.939428  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:22:55.439627  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:22:55.449553  289404 pod_ready.go:81] duration metric: took 4m0.002631772s waiting for pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace to be "Ready" ...
	E0412 20:22:55.449598  289404 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace to be "Ready" (will not retry!)
	I0412 20:22:55.449626  289404 pod_ready.go:38] duration metric: took 4m0.007144091s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0412 20:22:55.449665  289404 kubeadm.go:605] restartCluster took 5m9.090565131s
	W0412 20:22:55.449859  289404 out.go:241] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
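
The interleaved probes above come from two profiles running concurrently, old-k8s-version-20220412200421-42006 (process 289404, Kubernetes v1.16.0) and embed-certs-20220412200510-42006 (process 293188, v1.23.5), each polling its coredns pod every ~2s until the Ready condition appears or a 4m0s budget expires. A minimal sketch of that poll loop, assuming client-go; the helper name, cadence, and kubeconfig path are illustrative, not minikube's actual code:

    // A minimal sketch, assuming client-go, of the Ready poll that
    // pod_ready.go performs; pollPodReady and the kubeconfig path are
    // illustrative, not minikube's implementation.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // pollPodReady blocks until the named pod reports Ready or the timeout
    // elapses, matching the 4m0s budget and ~2s cadence in the log above.
    func pollPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
    	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
    		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
    		if err != nil {
    			return false, nil // treat errors as transient and keep polling
    		}
    		for _, c := range pod.Status.Conditions {
    			if c.Type == corev1.PodReady {
    				return c.Status == corev1.ConditionTrue, nil
    			}
    		}
    		return false, nil // a Pending pod may not carry a Ready condition yet
    	})
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	if err := pollPodReady(cs, "kube-system", "coredns-5644d7b6d9-rdxgk", 4*time.Minute); err != nil {
    		fmt.Println("timed out waiting for Ready:", err)
    	}
    }
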
	I0412 20:22:55.449901  289404 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0412 20:22:56.788407  289404 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.338480882s)
	I0412 20:22:56.788465  289404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0412 20:22:56.798571  289404 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0412 20:22:56.806252  289404 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0412 20:22:56.806310  289404 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0412 20:22:56.814094  289404 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
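
The `Process exited with status 2` above is expected at this point: the stale-config check just lists the four control-plane kubeconfigs, and all are absent because the preceding `kubeadm reset` removed them, so minikube skips cleanup and proceeds straight to `kubeadm init`. A loose local analogy to that existence check, sketched with os.Stat (illustrative; the real check runs `ls -la` over SSH):

    // A loose local analogy, using os.Stat, to the SSH-based existence
    // check behind kubeadm.go:152; the file list is taken from the log,
    // the program itself is illustrative.
    package main

    import (
    	"fmt"
    	"os"
    )

    func main() {
    	confs := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	allPresent := true
    	for _, c := range confs {
    		if _, err := os.Stat(c); err != nil {
    			allPresent = false
    			fmt.Println("missing:", c)
    		}
    	}
    	if !allPresent {
    		// nothing stale to clean up: kubeadm reset already removed them
    		fmt.Println("config check failed, skipping stale config cleanup")
    	}
    }
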
	I0412 20:22:56.814147  289404 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0412 20:22:57.205705  289404 out.go:203]   - Generating certificates and keys ...
	I0412 20:22:57.761892  289404 out.go:203]   - Booting up control plane ...
	I0412 20:22:57.939670  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:22:58.435823  293188 pod_ready.go:81] duration metric: took 4m0.002987778s waiting for pod "coredns-64897985d-zvglg" in "kube-system" namespace to be "Ready" ...
	E0412 20:22:58.435854  293188 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "coredns-64897985d-zvglg" in "kube-system" namespace to be "Ready" (will not retry!)
	I0412 20:22:58.435889  293188 pod_ready.go:38] duration metric: took 4m0.00926918s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0412 20:22:58.435924  293188 kubeadm.go:605] restartCluster took 4m15.431156944s
	W0412 20:22:58.436101  293188 out.go:241] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
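
Every probe in both 4-minute windows failed for the same scheduler-reported reason: the single node carried a taint (the v1.23.5 run names it explicitly as node.kubernetes.io/not-ready) that the coredns pods do not tolerate. A sketch, assuming client-go, that lists the node taints behind such a message; the kubeconfig path is a placeholder:

    // Illustrative client-go sketch: list node taints to surface the
    // node.kubernetes.io/not-ready condition the scheduler reports above.
    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, n := range nodes.Items {
    		for _, t := range n.Spec.Taints {
    			// node.kubernetes.io/not-ready:NoSchedule blocks pods (like
    			// coredns) that do not tolerate it until the kubelet is Ready.
    			fmt.Printf("%s  %s=%s:%s\n", n.Name, t.Key, t.Value, t.Effect)
    		}
    	}
    }
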
	I0412 20:22:58.436140  293188 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0412 20:23:00.308017  293188 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.871849788s)
	I0412 20:23:00.308112  293188 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0412 20:23:00.320139  293188 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0412 20:23:00.327966  293188 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0412 20:23:00.328042  293188 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0412 20:23:00.336326  293188 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0412 20:23:00.336368  293188 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0412 20:23:00.611970  293188 out.go:203]   - Generating certificates and keys ...
	I0412 20:23:01.168395  293188 out.go:203]   - Booting up control plane ...
	I0412 20:23:06.805594  289404 out.go:203]   - Configuring RBAC rules ...
	I0412 20:23:07.228571  289404 cni.go:93] Creating CNI manager for ""
	I0412 20:23:07.228608  289404 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0412 20:23:07.230875  289404 out.go:176] * Configuring CNI (Container Networking Interface) ...
	I0412 20:23:07.230960  289404 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0412 20:23:07.235577  289404 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.16.0/kubectl ...
	I0412 20:23:07.235606  289404 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0412 20:23:07.249805  289404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
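
The CNI step above stages the 2429-byte kindnet manifest at /var/tmp/minikube/cni.yaml and applies it with the version-matched kubectl. The same command, wrapped in os/exec for local reproduction; the wrapper is a sketch, the command line is verbatim from the log:

    // A hedged os/exec wrapper around the exact apply command in the log;
    // the wrapper is illustrative, the paths are the ones minikube uses.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.16.0/kubectl",
    		"apply",
    		"--kubeconfig=/var/lib/minikube/kubeconfig",
    		"-f", "/var/tmp/minikube/cni.yaml")
    	out, err := cmd.CombinedOutput()
    	fmt.Print(string(out))
    	if err != nil {
    		panic(err) // non-zero exit from kubectl apply
    	}
    }
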
	I0412 20:23:07.476958  289404 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0412 20:23:07.477058  289404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:07.477062  289404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.25.2 minikube.k8s.io/commit=dcd548d63d1c0dcbdc0ffc0bd37d4379117c142f minikube.k8s.io/name=old-k8s-version-20220412200421-42006 minikube.k8s.io/updated_at=2022_04_12T20_23_07_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:07.617207  289404 ops.go:34] apiserver oom_adj: -16
	I0412 20:23:07.617401  289404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:08.195772  289404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:08.695638  289404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:09.196205  289404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:12.717153  293188 out.go:203]   - Configuring RBAC rules ...
	I0412 20:23:13.131342  293188 cni.go:93] Creating CNI manager for ""
	I0412 20:23:13.131368  293188 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0412 20:23:09.695425  289404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:10.195930  289404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:10.695954  289404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:11.195633  289404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:11.695826  289404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:12.195852  289404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:12.696130  289404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:13.195253  289404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:13.696165  289404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:14.196144  289404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:13.133726  293188 out.go:176] * Configuring CNI (Container Networking Interface) ...
	I0412 20:23:13.133819  293188 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0412 20:23:13.137703  293188 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.5/kubectl ...
	I0412 20:23:13.137723  293188 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0412 20:23:13.151266  293188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0412 20:23:13.779496  293188 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0412 20:23:13.779592  293188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:13.779602  293188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl label nodes minikube.k8s.io/version=v1.25.2 minikube.k8s.io/commit=dcd548d63d1c0dcbdc0ffc0bd37d4379117c142f minikube.k8s.io/name=embed-certs-20220412200510-42006 minikube.k8s.io/updated_at=2022_04_12T20_23_13_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:13.844319  293188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:13.844349  293188 ops.go:34] apiserver oom_adj: -16
	I0412 20:23:14.416398  293188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:14.915875  293188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:15.416596  293188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:14.695253  289404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:15.195150  289404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:15.695415  289404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:16.195943  289404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:16.695835  289404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:17.196122  289404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:17.695700  289404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:18.195147  289404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:18.695398  289404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:19.195516  289404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:15.916799  293188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:16.416204  293188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:16.916796  293188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:17.416351  293188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:17.916642  293188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:18.416704  293188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:18.916121  293188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:19.415863  293188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:19.915946  293188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:20.416316  293188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:19.695272  289404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:20.195231  289404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:20.695839  289404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:21.196042  289404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:21.695436  289404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:22.195840  289404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:22.265152  289404 kubeadm.go:1020] duration metric: took 14.788147094s to wait for elevateKubeSystemPrivileges.
	I0412 20:23:22.265190  289404 kubeadm.go:393] StartCluster complete in 5m35.954640439s
	I0412 20:23:22.265216  289404 settings.go:142] acquiring lock: {Name:mkaf0259d09993f7f0249c32b54fea561e21f88c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 20:23:22.265344  289404 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0412 20:23:22.266642  289404 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig: {Name:mk47182dadcc139652898f38b199a7292a3b4031 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 20:23:22.781755  289404 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "old-k8s-version-20220412200421-42006" rescaled to 1
	I0412 20:23:22.781838  289404 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0412 20:23:22.784342  289404 out.go:176] * Verifying Kubernetes components...
	I0412 20:23:22.784399  289404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0412 20:23:22.781888  289404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0412 20:23:22.781911  289404 addons.go:415] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I0412 20:23:22.784549  289404 addons.go:65] Setting storage-provisioner=true in profile "old-k8s-version-20220412200421-42006"
	I0412 20:23:22.784574  289404 addons.go:153] Setting addon storage-provisioner=true in "old-k8s-version-20220412200421-42006"
	W0412 20:23:22.784587  289404 addons.go:165] addon storage-provisioner should already be in state true
	I0412 20:23:22.784588  289404 addons.go:65] Setting dashboard=true in profile "old-k8s-version-20220412200421-42006"
	I0412 20:23:22.784607  289404 addons.go:153] Setting addon dashboard=true in "old-k8s-version-20220412200421-42006"
	I0412 20:23:22.782092  289404 config.go:178] Loaded profile config "old-k8s-version-20220412200421-42006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I0412 20:23:22.784626  289404 addons.go:65] Setting metrics-server=true in profile "old-k8s-version-20220412200421-42006"
	I0412 20:23:22.784639  289404 addons.go:153] Setting addon metrics-server=true in "old-k8s-version-20220412200421-42006"
	I0412 20:23:22.784643  289404 host.go:66] Checking if "old-k8s-version-20220412200421-42006" exists ...
	W0412 20:23:22.784654  289404 addons.go:165] addon metrics-server should already be in state true
	I0412 20:23:22.784604  289404 addons.go:65] Setting default-storageclass=true in profile "old-k8s-version-20220412200421-42006"
	I0412 20:23:22.784699  289404 host.go:66] Checking if "old-k8s-version-20220412200421-42006" exists ...
	I0412 20:23:22.784706  289404 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-20220412200421-42006"
	W0412 20:23:22.784622  289404 addons.go:165] addon dashboard should already be in state true
	I0412 20:23:22.784854  289404 host.go:66] Checking if "old-k8s-version-20220412200421-42006" exists ...
	I0412 20:23:22.784998  289404 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220412200421-42006 --format={{.State.Status}}
	I0412 20:23:22.785175  289404 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220412200421-42006 --format={{.State.Status}}
	I0412 20:23:22.785177  289404 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220412200421-42006 --format={{.State.Status}}
	I0412 20:23:22.785289  289404 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220412200421-42006 --format={{.State.Status}}
	I0412 20:23:22.834905  289404 out.go:176]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0412 20:23:22.834967  289404 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0412 20:23:22.834839  289404 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0412 20:23:22.834976  289404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0412 20:23:22.835108  289404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220412200421-42006
	I0412 20:23:22.835109  289404 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0412 20:23:22.835146  289404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0412 20:23:22.835197  289404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220412200421-42006
	I0412 20:23:22.840482  289404 addons.go:153] Setting addon default-storageclass=true in "old-k8s-version-20220412200421-42006"
	W0412 20:23:22.840512  289404 addons.go:165] addon default-storageclass should already be in state true
	I0412 20:23:22.840547  289404 host.go:66] Checking if "old-k8s-version-20220412200421-42006" exists ...
	I0412 20:23:22.841070  289404 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220412200421-42006 --format={{.State.Status}}
	I0412 20:23:22.843111  289404 out.go:176]   - Using image kubernetesui/dashboard:v2.5.1
	I0412 20:23:22.844712  289404 out.go:176]   - Using image k8s.gcr.io/echoserver:1.4
	I0412 20:23:22.844786  289404 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0412 20:23:22.844804  289404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0412 20:23:22.844869  289404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220412200421-42006
	I0412 20:23:22.883155  289404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49427 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/old-k8s-version-20220412200421-42006/id_rsa Username:docker}
	I0412 20:23:22.885010  289404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49427 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/old-k8s-version-20220412200421-42006/id_rsa Username:docker}
	I0412 20:23:22.885724  289404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49427 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/old-k8s-version-20220412200421-42006/id_rsa Username:docker}
	I0412 20:23:22.891532  289404 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0412 20:23:22.891561  289404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0412 20:23:22.891613  289404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220412200421-42006
	I0412 20:23:22.894872  289404 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-20220412200421-42006" to be "Ready" ...
	I0412 20:23:22.894917  289404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.67.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0412 20:23:22.941013  289404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49427 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/old-k8s-version-20220412200421-42006/id_rsa Username:docker}
	I0412 20:23:23.009112  289404 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0412 20:23:23.009152  289404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0412 20:23:23.017044  289404 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0412 20:23:23.017070  289404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0412 20:23:23.087289  289404 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0412 20:23:23.087324  289404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0412 20:23:23.098845  289404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0412 20:23:23.100553  289404 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0412 20:23:23.100586  289404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0412 20:23:23.180997  289404 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0412 20:23:23.181029  289404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0412 20:23:23.199679  289404 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0412 20:23:23.199710  289404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0412 20:23:23.200117  289404 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0412 20:23:23.200143  289404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0412 20:23:23.216261  289404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0412 20:23:23.217306  289404 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0412 20:23:23.217335  289404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0412 20:23:23.293044  289404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0412 20:23:23.296386  289404 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0412 20:23:23.296416  289404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0412 20:23:23.381958  289404 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0412 20:23:23.381988  289404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0412 20:23:23.400957  289404 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0412 20:23:23.400986  289404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0412 20:23:23.404306  289404 start.go:777] {"host.minikube.internal": 192.168.67.1} host record injected into CoreDNS
	I0412 20:23:23.485207  289404 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0412 20:23:23.485240  289404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0412 20:23:23.501224  289404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0412 20:23:24.002810  289404 addons.go:386] Verifying addon metrics-server=true in "old-k8s-version-20220412200421-42006"
	I0412 20:23:20.916222  293188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:21.416859  293188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:21.916573  293188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:22.415915  293188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:22.915956  293188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:23.416356  293188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:23.916733  293188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:24.415894  293188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:24.916772  293188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:25.416205  293188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:25.916674  293188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:26.416183  293188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:26.916867  293188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:26.975833  293188 kubeadm.go:1020] duration metric: took 13.196293095s to wait for elevateKubeSystemPrivileges.
	I0412 20:23:26.975874  293188 kubeadm.go:393] StartCluster complete in 4m44.018219722s
	I0412 20:23:26.975896  293188 settings.go:142] acquiring lock: {Name:mkaf0259d09993f7f0249c32b54fea561e21f88c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 20:23:26.976012  293188 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0412 20:23:26.978211  293188 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig: {Name:mk47182dadcc139652898f38b199a7292a3b4031 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 20:23:27.500701  293188 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "embed-certs-20220412200510-42006" rescaled to 1
	I0412 20:23:27.500763  293188 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0412 20:23:27.503023  293188 out.go:176] * Verifying Kubernetes components...
	I0412 20:23:27.500837  293188 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0412 20:23:27.503093  293188 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0412 20:23:27.500871  293188 addons.go:415] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I0412 20:23:27.503173  293188 addons.go:65] Setting storage-provisioner=true in profile "embed-certs-20220412200510-42006"
	I0412 20:23:27.501024  293188 config.go:178] Loaded profile config "embed-certs-20220412200510-42006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.5
	I0412 20:23:27.503205  293188 addons.go:65] Setting metrics-server=true in profile "embed-certs-20220412200510-42006"
	I0412 20:23:27.503209  293188 addons.go:65] Setting default-storageclass=true in profile "embed-certs-20220412200510-42006"
	I0412 20:23:27.503216  293188 addons.go:153] Setting addon metrics-server=true in "embed-certs-20220412200510-42006"
	I0412 20:23:27.503190  293188 addons.go:65] Setting dashboard=true in profile "embed-certs-20220412200510-42006"
	I0412 20:23:27.503256  293188 addons.go:153] Setting addon dashboard=true in "embed-certs-20220412200510-42006"
	I0412 20:23:27.503196  293188 addons.go:153] Setting addon storage-provisioner=true in "embed-certs-20220412200510-42006"
	W0412 20:23:27.503276  293188 addons.go:165] addon dashboard should already be in state true
	W0412 20:23:27.503282  293188 addons.go:165] addon storage-provisioner should already be in state true
	I0412 20:23:27.503325  293188 host.go:66] Checking if "embed-certs-20220412200510-42006" exists ...
	I0412 20:23:27.503325  293188 host.go:66] Checking if "embed-certs-20220412200510-42006" exists ...
	W0412 20:23:27.503229  293188 addons.go:165] addon metrics-server should already be in state true
	I0412 20:23:27.503228  293188 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-20220412200510-42006"
	I0412 20:23:27.503589  293188 host.go:66] Checking if "embed-certs-20220412200510-42006" exists ...
	I0412 20:23:27.503804  293188 cli_runner.go:164] Run: docker container inspect embed-certs-20220412200510-42006 --format={{.State.Status}}
	I0412 20:23:27.503948  293188 cli_runner.go:164] Run: docker container inspect embed-certs-20220412200510-42006 --format={{.State.Status}}
	I0412 20:23:27.503973  293188 cli_runner.go:164] Run: docker container inspect embed-certs-20220412200510-42006 --format={{.State.Status}}
	I0412 20:23:27.504031  293188 cli_runner.go:164] Run: docker container inspect embed-certs-20220412200510-42006 --format={{.State.Status}}
	I0412 20:23:27.516146  293188 node_ready.go:35] waiting up to 6m0s for node "embed-certs-20220412200510-42006" to be "Ready" ...
	I0412 20:23:27.550686  293188 out.go:176]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0412 20:23:27.550784  293188 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0412 20:23:27.550803  293188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0412 20:23:27.550859  293188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220412200510-42006
	I0412 20:23:27.556204  293188 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0412 20:23:27.556346  293188 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0412 20:23:27.556362  293188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0412 20:23:27.556409  293188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220412200510-42006
	I0412 20:23:27.560689  293188 addons.go:153] Setting addon default-storageclass=true in "embed-certs-20220412200510-42006"
	W0412 20:23:27.560742  293188 addons.go:165] addon default-storageclass should already be in state true
	I0412 20:23:27.560776  293188 host.go:66] Checking if "embed-certs-20220412200510-42006" exists ...
	I0412 20:23:27.561846  293188 cli_runner.go:164] Run: docker container inspect embed-certs-20220412200510-42006 --format={{.State.Status}}
	I0412 20:23:27.563827  293188 out.go:176]   - Using image kubernetesui/dashboard:v2.5.1
	I0412 20:23:27.566302  293188 out.go:176]   - Using image k8s.gcr.io/echoserver:1.4
	I0412 20:23:27.566378  293188 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0412 20:23:27.566390  293188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0412 20:23:27.566448  293188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220412200510-42006
	I0412 20:23:27.595498  293188 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0412 20:23:27.598031  293188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/embed-certs-20220412200510-42006/id_rsa Username:docker}
	I0412 20:23:27.600994  293188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/embed-certs-20220412200510-42006/id_rsa Username:docker}
	I0412 20:23:27.616248  293188 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0412 20:23:27.616282  293188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0412 20:23:27.616343  293188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220412200510-42006
	I0412 20:23:27.627801  293188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/embed-certs-20220412200510-42006/id_rsa Username:docker}
	I0412 20:23:27.656490  293188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/embed-certs-20220412200510-42006/id_rsa Username:docker}
	I0412 20:23:27.738871  293188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0412 20:23:27.787800  293188 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0412 20:23:27.787831  293188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0412 20:23:27.791933  293188 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0412 20:23:27.791958  293188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0412 20:23:27.797765  293188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0412 20:23:27.803394  293188 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0412 20:23:27.803425  293188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0412 20:23:27.808640  293188 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0412 20:23:27.808666  293188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0412 20:23:27.892163  293188 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0412 20:23:27.892195  293188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0412 20:23:27.896562  293188 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0412 20:23:27.896592  293188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0412 20:23:27.901548  293188 start.go:777] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS
	I0412 20:23:27.979768  293188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0412 20:23:27.980178  293188 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0412 20:23:27.980200  293188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0412 20:23:28.001603  293188 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0412 20:23:28.001637  293188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0412 20:23:28.086251  293188 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0412 20:23:28.086331  293188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0412 20:23:28.102562  293188 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0412 20:23:28.102631  293188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0412 20:23:28.179329  293188 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0412 20:23:28.179360  293188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0412 20:23:28.201845  293188 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0412 20:23:28.201898  293188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0412 20:23:28.292511  293188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0412 20:23:28.699642  293188 addons.go:386] Verifying addon metrics-server=true in "embed-certs-20220412200510-42006"
	I0412 20:23:24.323632  289404 out.go:176] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0412 20:23:24.323662  289404 addons.go:417] enableAddons completed in 1.541765473s
	I0412 20:23:24.904515  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:23:26.904888  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:23:28.905738  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:23:29.110155  293188 out.go:176] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0412 20:23:29.110184  293188 addons.go:417] enableAddons completed in 1.609328567s
	I0412 20:23:29.529851  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:23:31.405317  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:23:33.405528  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:23:32.030061  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:23:34.030385  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:23:35.905005  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:23:37.905698  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:23:36.529738  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:23:39.029385  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:23:40.405606  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:23:42.904575  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:23:41.030287  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:23:43.030360  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:23:45.530065  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:23:44.904640  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:23:46.905176  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:23:47.530314  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:23:49.530546  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:23:49.405163  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:23:51.405698  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:23:53.904569  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:23:52.030189  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:23:54.529461  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:23:55.904874  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:23:58.404720  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:23:56.530043  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:23:59.029436  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:24:00.405668  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:24:02.905328  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:24:01.029972  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:24:03.530117  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:24:05.530287  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:24:05.404966  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:24:07.905041  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:24:08.029993  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:24:10.529708  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:24:10.405806  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:24:12.905494  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:24:12.530227  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:24:15.030365  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:24:15.404546  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:24:17.405765  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:24:17.529883  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:24:20.030387  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:24:19.905315  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:24:22.405755  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:24:22.529841  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:24:25.029353  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:24:24.904584  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:24:27.405712  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:24:27.029951  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:24:29.529761  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:24:29.905343  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:24:31.905574  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:24:31.529947  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:24:34.029808  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:24:34.404690  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:24:36.405661  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:24:38.905176  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:24:36.030055  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:24:38.529175  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:24:40.529796  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:24:41.405438  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:24:43.905150  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:24:43.030151  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:24:45.529652  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:24:45.905189  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:24:48.405669  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:24:47.530080  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:24:50.029611  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:24:50.905152  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:24:53.404952  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:24:52.029988  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:24:54.529864  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:24:55.905884  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:24:58.404742  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:24:56.530329  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:24:59.030173  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:25:00.904714  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:25:02.905539  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:25:01.529575  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:25:03.529634  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:25:05.530147  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:25:05.404703  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:25:07.404929  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:25:08.030263  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:25:10.529544  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:25:09.904642  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:25:11.905009  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:25:12.529795  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:25:15.029585  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:25:14.405260  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:25:16.405707  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:25:18.904489  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:25:17.029751  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:25:19.529776  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:25:20.905048  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:25:22.905123  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:25:22.030036  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:25:24.030201  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	9833ae46466cc       6de166512aa22       27 seconds ago      Exited              kindnet-cni               7                   eac241d106cdd
	e86db06fb9ce1       3c53fa8541f95       12 minutes ago      Running             kube-proxy                0                   484376a2ef747
	51def5f5fb57c       25f8c7f3da61c       12 minutes ago      Running             etcd                      0                   fceaa872be874
	3c8657a1a5932       884d49d6d8c9f       12 minutes ago      Running             kube-scheduler            0                   ac91422e769ae
	1032ec9dc604b       3fc1d62d65872       12 minutes ago      Running             kube-apiserver            0                   c698f24911d58
	71af7fb31571e       b0c9e5e4dbb14       12 minutes ago      Running             kube-controller-manager   0                   32d426a8d8c0a
	
	* 
	* ==> containerd <==
	* -- Logs begin at Tue 2022-04-12 20:12:38 UTC, end at Tue 2022-04-12 20:25:26 UTC. --
	Apr 12 20:17:04 default-k8s-different-port-20220412201228-42006 containerd[471]: time="2022-04-12T20:17:04.123044477Z" level=warning msg="cleaning up after shim disconnected" id=9e5744237dfde180210747e05e22a0b3a09bfe83b09e6e89b16a9b1bb214ee4f namespace=k8s.io
	Apr 12 20:17:04 default-k8s-different-port-20220412201228-42006 containerd[471]: time="2022-04-12T20:17:04.123061425Z" level=info msg="cleaning up dead shim"
	Apr 12 20:17:04 default-k8s-different-port-20220412201228-42006 containerd[471]: time="2022-04-12T20:17:04.134824337Z" level=warning msg="cleanup warnings time=\"2022-04-12T20:17:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2420\n"
	Apr 12 20:17:05 default-k8s-different-port-20220412201228-42006 containerd[471]: time="2022-04-12T20:17:05.135559073Z" level=info msg="RemoveContainer for \"07e5786acde4a835b00d8f15e8dc7966937a257ef07b018158203f654fd2748a\""
	Apr 12 20:17:05 default-k8s-different-port-20220412201228-42006 containerd[471]: time="2022-04-12T20:17:05.141079170Z" level=info msg="RemoveContainer for \"07e5786acde4a835b00d8f15e8dc7966937a257ef07b018158203f654fd2748a\" returns successfully"
	Apr 12 20:19:45 default-k8s-different-port-20220412201228-42006 containerd[471]: time="2022-04-12T20:19:45.682416527Z" level=info msg="CreateContainer within sandbox \"eac241d106cdd1f61526f1545df2f8aed3d703e05effb6e0695e11fe34b449c7\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:6,}"
	Apr 12 20:19:45 default-k8s-different-port-20220412201228-42006 containerd[471]: time="2022-04-12T20:19:45.695536647Z" level=info msg="CreateContainer within sandbox \"eac241d106cdd1f61526f1545df2f8aed3d703e05effb6e0695e11fe34b449c7\" for &ContainerMetadata{Name:kindnet-cni,Attempt:6,} returns container id \"ea18a467fdaf0983e900a92b9825a08b9d95c3efaf2135fb7aedb1eed7c0dcbb\""
	Apr 12 20:19:45 default-k8s-different-port-20220412201228-42006 containerd[471]: time="2022-04-12T20:19:45.696157108Z" level=info msg="StartContainer for \"ea18a467fdaf0983e900a92b9825a08b9d95c3efaf2135fb7aedb1eed7c0dcbb\""
	Apr 12 20:19:45 default-k8s-different-port-20220412201228-42006 containerd[471]: time="2022-04-12T20:19:45.798908010Z" level=info msg="StartContainer for \"ea18a467fdaf0983e900a92b9825a08b9d95c3efaf2135fb7aedb1eed7c0dcbb\" returns successfully"
	Apr 12 20:19:56 default-k8s-different-port-20220412201228-42006 containerd[471]: time="2022-04-12T20:19:56.114093174Z" level=info msg="shim disconnected" id=ea18a467fdaf0983e900a92b9825a08b9d95c3efaf2135fb7aedb1eed7c0dcbb
	Apr 12 20:19:56 default-k8s-different-port-20220412201228-42006 containerd[471]: time="2022-04-12T20:19:56.114150634Z" level=warning msg="cleaning up after shim disconnected" id=ea18a467fdaf0983e900a92b9825a08b9d95c3efaf2135fb7aedb1eed7c0dcbb namespace=k8s.io
	Apr 12 20:19:56 default-k8s-different-port-20220412201228-42006 containerd[471]: time="2022-04-12T20:19:56.114159870Z" level=info msg="cleaning up dead shim"
	Apr 12 20:19:56 default-k8s-different-port-20220412201228-42006 containerd[471]: time="2022-04-12T20:19:56.125285686Z" level=warning msg="cleanup warnings time=\"2022-04-12T20:19:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2758\n"
	Apr 12 20:19:56 default-k8s-different-port-20220412201228-42006 containerd[471]: time="2022-04-12T20:19:56.437004065Z" level=info msg="RemoveContainer for \"9e5744237dfde180210747e05e22a0b3a09bfe83b09e6e89b16a9b1bb214ee4f\""
	Apr 12 20:19:56 default-k8s-different-port-20220412201228-42006 containerd[471]: time="2022-04-12T20:19:56.441491725Z" level=info msg="RemoveContainer for \"9e5744237dfde180210747e05e22a0b3a09bfe83b09e6e89b16a9b1bb214ee4f\" returns successfully"
	Apr 12 20:24:58 default-k8s-different-port-20220412201228-42006 containerd[471]: time="2022-04-12T20:24:58.682603030Z" level=info msg="CreateContainer within sandbox \"eac241d106cdd1f61526f1545df2f8aed3d703e05effb6e0695e11fe34b449c7\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:7,}"
	Apr 12 20:24:58 default-k8s-different-port-20220412201228-42006 containerd[471]: time="2022-04-12T20:24:58.696164115Z" level=info msg="CreateContainer within sandbox \"eac241d106cdd1f61526f1545df2f8aed3d703e05effb6e0695e11fe34b449c7\" for &ContainerMetadata{Name:kindnet-cni,Attempt:7,} returns container id \"9833ae46466ccbb9055512e120cc659bee6ff8ac05bf843caeb9217333fd6b63\""
	Apr 12 20:24:58 default-k8s-different-port-20220412201228-42006 containerd[471]: time="2022-04-12T20:24:58.696772681Z" level=info msg="StartContainer for \"9833ae46466ccbb9055512e120cc659bee6ff8ac05bf843caeb9217333fd6b63\""
	Apr 12 20:24:58 default-k8s-different-port-20220412201228-42006 containerd[471]: time="2022-04-12T20:24:58.885508636Z" level=info msg="StartContainer for \"9833ae46466ccbb9055512e120cc659bee6ff8ac05bf843caeb9217333fd6b63\" returns successfully"
	Apr 12 20:25:09 default-k8s-different-port-20220412201228-42006 containerd[471]: time="2022-04-12T20:25:09.124126461Z" level=info msg="shim disconnected" id=9833ae46466ccbb9055512e120cc659bee6ff8ac05bf843caeb9217333fd6b63
	Apr 12 20:25:09 default-k8s-different-port-20220412201228-42006 containerd[471]: time="2022-04-12T20:25:09.124200062Z" level=warning msg="cleaning up after shim disconnected" id=9833ae46466ccbb9055512e120cc659bee6ff8ac05bf843caeb9217333fd6b63 namespace=k8s.io
	Apr 12 20:25:09 default-k8s-different-port-20220412201228-42006 containerd[471]: time="2022-04-12T20:25:09.124208947Z" level=info msg="cleaning up dead shim"
	Apr 12 20:25:09 default-k8s-different-port-20220412201228-42006 containerd[471]: time="2022-04-12T20:25:09.134960427Z" level=warning msg="cleanup warnings time=\"2022-04-12T20:25:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2863\n"
	Apr 12 20:25:09 default-k8s-different-port-20220412201228-42006 containerd[471]: time="2022-04-12T20:25:09.961243494Z" level=info msg="RemoveContainer for \"ea18a467fdaf0983e900a92b9825a08b9d95c3efaf2135fb7aedb1eed7c0dcbb\""
	Apr 12 20:25:09 default-k8s-different-port-20220412201228-42006 containerd[471]: time="2022-04-12T20:25:09.966172554Z" level=info msg="RemoveContainer for \"ea18a467fdaf0983e900a92b9825a08b9d95c3efaf2135fb7aedb1eed7c0dcbb\" returns successfully"
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-different-port-20220412201228-42006
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-different-port-20220412201228-42006
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=dcd548d63d1c0dcbdc0ffc0bd37d4379117c142f
	                    minikube.k8s.io/name=default-k8s-different-port-20220412201228-42006
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_04_12T20_13_10_0700
	                    minikube.k8s.io/version=v1.25.2
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 12 Apr 2022 20:13:06 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-different-port-20220412201228-42006
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 12 Apr 2022 20:25:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 12 Apr 2022 20:23:26 +0000   Tue, 12 Apr 2022 20:13:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 12 Apr 2022 20:23:26 +0000   Tue, 12 Apr 2022 20:13:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 12 Apr 2022 20:23:26 +0000   Tue, 12 Apr 2022 20:13:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Tue, 12 Apr 2022 20:23:26 +0000   Tue, 12 Apr 2022 20:13:03 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    default-k8s-different-port-20220412201228-42006
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873828Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873828Ki
	  pods:               110
	System Info:
	  Machine ID:                 140a143b31184b58be947b52a01fff83
	  System UUID:                ef825856-4086-4c06-9629-95bede787d92
	  Boot ID:                    16b2caa1-c1b9-4ccc-85b8-d4dc3f51a5e1
	  Kernel Version:             5.13.0-1023-gcp
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.5.10
	  Kubelet Version:            v1.23.5
	  Kube-Proxy Version:         v1.23.5
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                                       ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-default-k8s-different-port-20220412201228-42006                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kindnet-852v4                                                              100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-default-k8s-different-port-20220412201228-42006             250m (3%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-default-k8s-different-port-20220412201228-42006    200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-nfsgp                                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-default-k8s-different-port-20220412201228-42006             100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  Starting                 12m                kube-proxy  
	  Normal  NodeHasSufficientMemory  12m (x5 over 12m)  kubelet     Node default-k8s-different-port-20220412201228-42006 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x5 over 12m)  kubelet     Node default-k8s-different-port-20220412201228-42006 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x4 over 12m)  kubelet     Node default-k8s-different-port-20220412201228-42006 status is now: NodeHasSufficientPID
	  Normal  Starting                 12m                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  12m                kubelet     Node default-k8s-different-port-20220412201228-42006 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m                kubelet     Node default-k8s-different-port-20220412201228-42006 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m                kubelet     Node default-k8s-different-port-20220412201228-42006 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet     Updated Node Allocatable limit across pods
	
	* 
	* ==> dmesg <==
	* [  +0.000009] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +0.125166] IPv4: martian source 10.244.0.7 from 10.244.0.7, on dev vethe3e22a2f
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff d2 83 e6 b4 2e c9 08 06
	[  +0.519855] IPv4: martian source 10.244.0.8 from 10.244.0.8, on dev vethde433a44
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fe f7 53 8a eb 26 08 06
	[  +0.208112] IPv4: martian source 10.244.0.9 from 10.244.0.9, on dev veth05fda112
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 06 c9 f0 64 c1 d9 08 06
	[Apr12 20:12] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +1.026706] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +1.023926] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +2.947865] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +1.023840] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +1.019933] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +2.959880] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +1.007861] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +1.023916] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	
	* 
	* ==> etcd [51def5f5fb57c8ab61a9c585b1fe038e725e93a3a81684c7e48cceffbcd0e646] <==
	* {"level":"info","ts":"2022-04-12T20:13:03.196Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-04-12T20:13:03.196Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-04-12T20:13:03.196Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-04-12T20:13:03.196Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-04-12T20:13:03.196Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-04-12T20:13:04.085Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2022-04-12T20:13:04.085Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2022-04-12T20:13:04.085Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2022-04-12T20:13:04.085Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2022-04-12T20:13:04.085Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-04-12T20:13:04.085Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2022-04-12T20:13:04.085Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-04-12T20:13:04.085Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:default-k8s-different-port-20220412201228-42006 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2022-04-12T20:13:04.085Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-04-12T20:13:04.085Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-04-12T20:13:04.085Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-04-12T20:13:04.085Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-04-12T20:13:04.085Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-04-12T20:13:04.086Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2022-04-12T20:13:04.086Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-04-12T20:13:04.086Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-04-12T20:13:04.087Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-04-12T20:13:04.087Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2022-04-12T20:23:04.101Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":585}
	{"level":"info","ts":"2022-04-12T20:23:04.102Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":585,"took":"660.807µs"}
	
	* 
	* ==> kernel <==
	*  20:25:26 up  3:07,  0 users,  load average: 0.48, 0.85, 1.17
	Linux default-k8s-different-port-20220412201228-42006 5.13.0-1023-gcp #28~20.04.1-Ubuntu SMP Wed Mar 30 03:51:07 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [1032ec9dc604b2d805be253a0f7df89424fc5ef71ef86566ee57cd79cf66939c] <==
	* I0412 20:13:06.378891       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0412 20:13:06.380058       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
	I0412 20:13:06.380365       1 shared_informer.go:247] Caches are synced for node_authorizer 
	I0412 20:13:06.380526       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0412 20:13:06.380660       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I0412 20:13:06.389081       1 controller.go:611] quota admission added evaluator for: namespaces
	I0412 20:13:07.223083       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0412 20:13:07.223129       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0412 20:13:07.227737       1 storage_scheduling.go:93] created PriorityClass system-node-critical with value 2000001000
	I0412 20:13:07.231059       1 storage_scheduling.go:93] created PriorityClass system-cluster-critical with value 2000000000
	I0412 20:13:07.231090       1 storage_scheduling.go:109] all system priority classes are created successfully or already exist.
	I0412 20:13:07.640851       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0412 20:13:07.682744       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0412 20:13:07.805024       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0412 20:13:07.813172       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0412 20:13:07.814261       1 controller.go:611] quota admission added evaluator for: endpoints
	I0412 20:13:07.818683       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0412 20:13:08.360879       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0412 20:13:09.411225       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0412 20:13:09.419785       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0412 20:13:09.431828       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0412 20:13:14.599758       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0412 20:13:21.818265       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0412 20:13:21.968747       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0412 20:13:22.481492       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	* 
	* ==> kube-controller-manager [71af7fb31571e3cef12dcdba3ab49897e95bdbe6c1d9d6d5bbb1c36c97242cda] <==
	* I0412 20:13:21.215625       1 shared_informer.go:247] Caches are synced for taint 
	I0412 20:13:21.215676       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0412 20:13:21.215694       1 node_lifecycle_controller.go:1397] Initializing eviction metric for zone: 
	W0412 20:13:21.215761       1 node_lifecycle_controller.go:1012] Missing timestamp for Node default-k8s-different-port-20220412201228-42006. Assuming now as a timestamp.
	I0412 20:13:21.215805       1 node_lifecycle_controller.go:1163] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I0412 20:13:21.215859       1 event.go:294] "Event occurred" object="default-k8s-different-port-20220412201228-42006" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node default-k8s-different-port-20220412201228-42006 event: Registered Node default-k8s-different-port-20220412201228-42006 in Controller"
	I0412 20:13:21.229490       1 shared_informer.go:247] Caches are synced for deployment 
	I0412 20:13:21.315704       1 shared_informer.go:247] Caches are synced for ReplicationController 
	I0412 20:13:21.360412       1 shared_informer.go:247] Caches are synced for disruption 
	I0412 20:13:21.360445       1 disruption.go:371] Sending events to api server.
	I0412 20:13:21.368497       1 shared_informer.go:247] Caches are synced for HPA 
	I0412 20:13:21.385835       1 shared_informer.go:247] Caches are synced for resource quota 
	I0412 20:13:21.400192       1 shared_informer.go:247] Caches are synced for endpoint 
	I0412 20:13:21.411344       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
	I0412 20:13:21.424347       1 shared_informer.go:247] Caches are synced for resource quota 
	I0412 20:13:21.821606       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0412 20:13:21.821636       1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0412 20:13:21.825308       1 event.go:294] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-852v4"
	I0412 20:13:21.825372       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-nfsgp"
	I0412 20:13:21.839671       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0412 20:13:21.971282       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-64897985d to 2"
	I0412 20:13:22.044641       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-64897985d to 1"
	I0412 20:13:22.121317       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-rmqrj"
	I0412 20:13:22.126350       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-c2gzm"
	I0412 20:13:22.145463       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-64897985d-rmqrj"
	
	* 
	* ==> kube-proxy [e86db06fb9ce1685b312bc36622f28895b85dab6e39ee399901dce4efc6da848] <==
	* I0412 20:13:22.455007       1 node.go:163] Successfully retrieved node IP: 192.168.49.2
	I0412 20:13:22.455073       1 server_others.go:138] "Detected node IP" address="192.168.49.2"
	I0412 20:13:22.455117       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0412 20:13:22.478285       1 server_others.go:206] "Using iptables Proxier"
	I0412 20:13:22.478320       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0412 20:13:22.478326       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0412 20:13:22.478350       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0412 20:13:22.478788       1 server.go:656] "Version info" version="v1.23.5"
	I0412 20:13:22.479353       1 config.go:317] "Starting service config controller"
	I0412 20:13:22.479385       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0412 20:13:22.479423       1 config.go:226] "Starting endpoint slice config controller"
	I0412 20:13:22.479433       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0412 20:13:22.579611       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0412 20:13:22.579633       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [3c8657a1a5932876c532e5632e32b1b7bd034c015a4b5519a1ff53cf749d1ffd] <==
	* W0412 20:13:06.388989       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0412 20:13:06.389007       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0412 20:13:06.389014       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0412 20:13:06.389018       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0412 20:13:06.389730       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0412 20:13:06.389771       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0412 20:13:07.206657       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0412 20:13:07.206707       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0412 20:13:07.265873       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0412 20:13:07.265925       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0412 20:13:07.296201       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0412 20:13:07.296245       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0412 20:13:07.302602       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0412 20:13:07.302649       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0412 20:13:07.338917       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0412 20:13:07.338952       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0412 20:13:07.341982       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0412 20:13:07.342023       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0412 20:13:07.427305       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0412 20:13:07.427338       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0412 20:13:07.446555       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0412 20:13:07.446595       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0412 20:13:07.468839       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0412 20:13:07.468878       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0412 20:13:07.903442       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2022-04-12 20:12:38 UTC, end at Tue 2022-04-12 20:25:27 UTC. --
	Apr 12 20:24:24 default-k8s-different-port-20220412201228-42006 kubelet[1290]: E0412 20:24:24.952570    1290 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:24:25 default-k8s-different-port-20220412201228-42006 kubelet[1290]: I0412 20:24:25.680490    1290 scope.go:110] "RemoveContainer" containerID="ea18a467fdaf0983e900a92b9825a08b9d95c3efaf2135fb7aedb1eed7c0dcbb"
	Apr 12 20:24:25 default-k8s-different-port-20220412201228-42006 kubelet[1290]: E0412 20:24:25.680818    1290 pod_workers.go:949] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kindnet-cni pod=kindnet-852v4_kube-system(d4596d79-4aba-4c96-9fd5-c2c2b2010810)\"" pod="kube-system/kindnet-852v4" podUID=d4596d79-4aba-4c96-9fd5-c2c2b2010810
	Apr 12 20:24:29 default-k8s-different-port-20220412201228-42006 kubelet[1290]: E0412 20:24:29.953457    1290 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:24:34 default-k8s-different-port-20220412201228-42006 kubelet[1290]: E0412 20:24:34.955121    1290 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:24:36 default-k8s-different-port-20220412201228-42006 kubelet[1290]: I0412 20:24:36.680374    1290 scope.go:110] "RemoveContainer" containerID="ea18a467fdaf0983e900a92b9825a08b9d95c3efaf2135fb7aedb1eed7c0dcbb"
	Apr 12 20:24:36 default-k8s-different-port-20220412201228-42006 kubelet[1290]: E0412 20:24:36.680705    1290 pod_workers.go:949] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kindnet-cni pod=kindnet-852v4_kube-system(d4596d79-4aba-4c96-9fd5-c2c2b2010810)\"" pod="kube-system/kindnet-852v4" podUID=d4596d79-4aba-4c96-9fd5-c2c2b2010810
	Apr 12 20:24:39 default-k8s-different-port-20220412201228-42006 kubelet[1290]: E0412 20:24:39.956431    1290 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:24:44 default-k8s-different-port-20220412201228-42006 kubelet[1290]: E0412 20:24:44.957716    1290 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:24:47 default-k8s-different-port-20220412201228-42006 kubelet[1290]: I0412 20:24:47.679522    1290 scope.go:110] "RemoveContainer" containerID="ea18a467fdaf0983e900a92b9825a08b9d95c3efaf2135fb7aedb1eed7c0dcbb"
	Apr 12 20:24:47 default-k8s-different-port-20220412201228-42006 kubelet[1290]: E0412 20:24:47.679812    1290 pod_workers.go:949] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kindnet-cni pod=kindnet-852v4_kube-system(d4596d79-4aba-4c96-9fd5-c2c2b2010810)\"" pod="kube-system/kindnet-852v4" podUID=d4596d79-4aba-4c96-9fd5-c2c2b2010810
	Apr 12 20:24:49 default-k8s-different-port-20220412201228-42006 kubelet[1290]: E0412 20:24:49.959315    1290 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:24:54 default-k8s-different-port-20220412201228-42006 kubelet[1290]: E0412 20:24:54.960793    1290 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:24:58 default-k8s-different-port-20220412201228-42006 kubelet[1290]: I0412 20:24:58.680293    1290 scope.go:110] "RemoveContainer" containerID="ea18a467fdaf0983e900a92b9825a08b9d95c3efaf2135fb7aedb1eed7c0dcbb"
	Apr 12 20:24:59 default-k8s-different-port-20220412201228-42006 kubelet[1290]: E0412 20:24:59.962391    1290 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:25:04 default-k8s-different-port-20220412201228-42006 kubelet[1290]: E0412 20:25:04.963546    1290 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:25:09 default-k8s-different-port-20220412201228-42006 kubelet[1290]: I0412 20:25:09.960222    1290 scope.go:110] "RemoveContainer" containerID="ea18a467fdaf0983e900a92b9825a08b9d95c3efaf2135fb7aedb1eed7c0dcbb"
	Apr 12 20:25:09 default-k8s-different-port-20220412201228-42006 kubelet[1290]: I0412 20:25:09.960600    1290 scope.go:110] "RemoveContainer" containerID="9833ae46466ccbb9055512e120cc659bee6ff8ac05bf843caeb9217333fd6b63"
	Apr 12 20:25:09 default-k8s-different-port-20220412201228-42006 kubelet[1290]: E0412 20:25:09.960988    1290 pod_workers.go:949] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kindnet-cni pod=kindnet-852v4_kube-system(d4596d79-4aba-4c96-9fd5-c2c2b2010810)\"" pod="kube-system/kindnet-852v4" podUID=d4596d79-4aba-4c96-9fd5-c2c2b2010810
	Apr 12 20:25:09 default-k8s-different-port-20220412201228-42006 kubelet[1290]: E0412 20:25:09.964994    1290 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:25:14 default-k8s-different-port-20220412201228-42006 kubelet[1290]: E0412 20:25:14.966231    1290 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:25:19 default-k8s-different-port-20220412201228-42006 kubelet[1290]: E0412 20:25:19.967531    1290 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:25:22 default-k8s-different-port-20220412201228-42006 kubelet[1290]: I0412 20:25:22.679628    1290 scope.go:110] "RemoveContainer" containerID="9833ae46466ccbb9055512e120cc659bee6ff8ac05bf843caeb9217333fd6b63"
	Apr 12 20:25:22 default-k8s-different-port-20220412201228-42006 kubelet[1290]: E0412 20:25:22.680062    1290 pod_workers.go:949] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kindnet-cni pod=kindnet-852v4_kube-system(d4596d79-4aba-4c96-9fd5-c2c2b2010810)\"" pod="kube-system/kindnet-852v4" podUID=d4596d79-4aba-4c96-9fd5-c2c2b2010810
	Apr 12 20:25:24 default-k8s-different-port-20220412201228-42006 kubelet[1290]: E0412 20:25:24.968592    1290 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	

                                                
                                                
-- /stdout --
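The kubelet entries at the end of the dump carry the root cause for this failure: kindnet-cni is stuck in CrashLoopBackOff, so the CNI plugin never initializes and the node never reports Ready. Not part of the captured output, but a plausible follow-up against the same context would be to pull the crashed container's logs from its previous run (pod and container names are taken from the kubelet log; --previous and -c are standard kubectl flags):

	# fetch the last run's logs from the crash-looping CNI container
	kubectl --context default-k8s-different-port-20220412201228-42006 \
	  -n kube-system logs kindnet-852v4 -c kindnet-cni --previous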
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220412201228-42006 -n default-k8s-different-port-20220412201228-42006
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-different-port-20220412201228-42006 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: busybox coredns-64897985d-c2gzm storage-provisioner
helpers_test.go:272: ======> post-mortem[TestStartStop/group/default-k8s-different-port/serial/DeployApp]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context default-k8s-different-port-20220412201228-42006 describe pod busybox coredns-64897985d-c2gzm storage-provisioner
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20220412201228-42006 describe pod busybox coredns-64897985d-c2gzm storage-provisioner: exit status 1 (60.796821ms)

                                                
                                                
-- stdout --
	Name:         busybox
	Namespace:    default
	Priority:     0
	Node:         <none>
	Labels:       integration-test=busybox
	Annotations:  <none>
	Status:       Pending
	IP:           
	IPs:          <none>
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bcrdt (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-bcrdt:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                 From               Message
	  ----     ------            ----                ----               -------
	  Warning  FailedScheduling  47s (x8 over 8m2s)  default-scheduler  0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "coredns-64897985d-c2gzm" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context default-k8s-different-port-20220412201228-42006 describe pod busybox coredns-64897985d-c2gzm storage-provisioner: exit status 1
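The FailedScheduling event explains why busybox is among the non-running pods reported above: the cluster's only node still carries the node.kubernetes.io/not-ready taint (a direct consequence of the uninitialized CNI), and the pod's default tolerations do not cover it for scheduling. A quick way to confirm the taint, sketched against the same context (the jsonpath expression reads the standard .spec.taints field of the Node object):

	# list the taints on the only node in the cluster
	kubectl --context default-k8s-different-port-20220412201228-42006 \
	  get nodes -o jsonpath='{.items[0].spec.taints}'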
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220412201228-42006
helpers_test.go:235: (dbg) docker inspect default-k8s-different-port-20220412201228-42006:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6642b489f96391820ba70b96c7534c3a76d670c12f14b131c414488b6433932f",
	        "Created": "2022-04-12T20:12:37.404174744Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 274647,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-04-12T20:12:37.803691082Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:44d43b69f3d5ba7f801dca891b535f23f9839671e82277938ec7dc42a22c50d6",
	        "ResolvConfPath": "/var/lib/docker/containers/6642b489f96391820ba70b96c7534c3a76d670c12f14b131c414488b6433932f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6642b489f96391820ba70b96c7534c3a76d670c12f14b131c414488b6433932f/hostname",
	        "HostsPath": "/var/lib/docker/containers/6642b489f96391820ba70b96c7534c3a76d670c12f14b131c414488b6433932f/hosts",
	        "LogPath": "/var/lib/docker/containers/6642b489f96391820ba70b96c7534c3a76d670c12f14b131c414488b6433932f/6642b489f96391820ba70b96c7534c3a76d670c12f14b131c414488b6433932f-json.log",
	        "Name": "/default-k8s-different-port-20220412201228-42006",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-different-port-20220412201228-42006:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-different-port-20220412201228-42006",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/6c20441da854c76109edadd5c14467eeab1a532a78b987301c8ccc63f013fdb5-init/diff:/var/lib/docker/overlay2/a46d95d024de4bf9705eb193a92586bdab1878cd991975232b71b00099a9dcbd/diff:/var/lib/docker/overlay2/ea82ee4a684697cc3575193cd81b57372b927c9bf8e744fce634f9abd0ce56f9/diff:/var/lib/docker/overlay2/78746ad8dd0d6497f442bd186c99cfd280a7ed0ff07c9d33d217c0f00c8c4565/diff:/var/lib/docker/overlay2/a402f380eceb56655ea5f1e6ca4a61a01ae014a5df04f1a7d02d8f57ff3e6c84/diff:/var/lib/docker/overlay2/b27a231791a4d14a662f9e6e34fdd213411e56cc17149199657aa480018b3c72/diff:/var/lib/docker/overlay2/0a44e7fc2c8d5589d496b9d0585d39e8e142f48342ff9669a35c370bd0298e42/diff:/var/lib/docker/overlay2/6ca98e52ca7d4cc60d14bd2db9969dd3356e0e0ce3acd5bfb5734e6e59f52c7e/diff:/var/lib/docker/overlay2/9957a7c00c30c9d801326093ddf20994a7ee1daaa54bc4dac5c2dd6d8711bd7e/diff:/var/lib/docker/overlay2/f7a1aafecf6ee716c484b5eecbbf236a53607c253fe283c289707fad85495a88/diff:/var/lib/docker/overlay2/fe8cd1
26522650fedfc827751e0b74da9a882ff48de51bc9dee6428ee3bc1122/diff:/var/lib/docker/overlay2/5b4cc7e4a78288063ad39231ca158608aa28e9dec6015d4e186e4c4d6888017f/diff:/var/lib/docker/overlay2/2a754ceb6abee0f92c99667fae50c7899233e94595630e9caffbf73cda1ff741/diff:/var/lib/docker/overlay2/9e69139d9b2bc63ab678378e004018ece394ec37e8289ba5eb30901dda160da5/diff:/var/lib/docker/overlay2/3db8e6413b3a1f309b81d2e1a79c3d239c4e4568b31a6f4bf92511f477f3a61d/diff:/var/lib/docker/overlay2/5ab54e45d09e2d6da4f4228ebae3075b5974e1d847526c1011fc7368392ef0d2/diff:/var/lib/docker/overlay2/6daf6a3cf916347bbbb70ace4aab29dd0f272dc9e39d6b0bf14940470857f1d5/diff:/var/lib/docker/overlay2/b85d29df9ed74e769c82a956eb46ca4eaf51018e94270fee2f58a6f2d82c354c/diff:/var/lib/docker/overlay2/0804b9c30e0dcc68e15139106e47bca1969b010d520652c87ff1476f5da9b799/diff:/var/lib/docker/overlay2/2ef50ba91c77826aae2efca8daf7194c2d56fd8e745476a35413585cdab580a6/diff:/var/lib/docker/overlay2/6f5a272367c30d47254dedc8a42e6b2791c406c3b74fd6a8242d568e4ec362e3/diff:/var/lib/d
ocker/overlay2/e978bd5ca7463862ca1b51d0bf19f95d916464dc866f09f1ab4a5ae4c082c3a9/diff:/var/lib/docker/overlay2/0d60a5805e276ca3bff4824250eab1d2960e9d10d28282e07652204c07dc107f/diff:/var/lib/docker/overlay2/d00efa0bc999057fcf3efdeed81022cc8b9b9871919f11d7d9199a3d22fda41b/diff:/var/lib/docker/overlay2/44d3db5bf7925c4cc8ee60008ff23d799e12ea6586850d797b930fa796788861/diff:/var/lib/docker/overlay2/4af15c525b7ce96b7fd4117c156f53cf9099702641c2907909c12b7019563d44/diff:/var/lib/docker/overlay2/ae9ca4b8da4afb1303158a42ec2ac83dc057c0eaefcd69b7eeaa094ae24a39e7/diff:/var/lib/docker/overlay2/afb8ebd776ddcba17d1056f2350cd0b303c6664964644896a92e9c07252b5d95/diff:/var/lib/docker/overlay2/41b6235378ad54ccaec907f16811e7cd66bd777db63151293f4d8247a33af8f1/diff:/var/lib/docker/overlay2/e079465076581cb577a9d5c7d676cecb6495ddd73d9fc330e734203dd7e48607/diff:/var/lib/docker/overlay2/2d3a7c3e62a99d54d94c2562e13b904453442bda8208afe73cdbe1afdbdd0684/diff:/var/lib/docker/overlay2/b9e03b9cbc1c5a9bbdbb0c99ca5d7539c2fa81a37872c40e07377b52f19
50f4b/diff:/var/lib/docker/overlay2/fd0b72378869edec809e7ead1e4448ae67c73245e0e98d751c51253c80f12d56/diff:/var/lib/docker/overlay2/a34f5625ad35eb2eb1058204a5c23590d70d9aae62a3a0cf05f87501c388ccde/diff:/var/lib/docker/overlay2/6221ad5f4d7b133c35d96ab112cf2eb437196475a72ea0ec8952c058c6644381/diff:/var/lib/docker/overlay2/b33a322162ab62a47e5e731b35da4a989d8a79fcb67e1925b109eace6772370c/diff:/var/lib/docker/overlay2/b52fc81aca49f276f1c709fa139521063628f4042b9da5969a3487a57ee3226b/diff:/var/lib/docker/overlay2/5b4d11a181cad1ea657c7ea99d422b51c942ece21b8d24442b4e8806644e0e1c/diff:/var/lib/docker/overlay2/1620ce1d42f02f38d07f3ff0970e3df6940a3be20f3c7cd835f4f40f5cc2d010/diff:/var/lib/docker/overlay2/43f18c528700dc241024bb24f43a0d5192ecc9575f4b053582410f6265326434/diff:/var/lib/docker/overlay2/e59874999e485483e50da428a499e40c91890c33515857454d7a64bc04ca0c43/diff:/var/lib/docker/overlay2/a120ff1bbaa325cd87d2682d6751d3bf287b66d4bbe31bd1f9f6283d724491ac/diff:/var/lib/docker/overlay2/a6a6f3646fabc023283ff6349b9627be8332c4
bb740688f8fda12c98bd76b725/diff:/var/lib/docker/overlay2/3c2b110c4b3a8689b2792b2b73f99f06bd9858b494c2164e812208579b0223f2/diff:/var/lib/docker/overlay2/98e3881e2e4128283f8d66fafc082bc795e22eab77f135635d3249367b92ba5c/diff:/var/lib/docker/overlay2/ce937670cf64eff618c699bfd15e46c6d70c0184fef594182e5ec6df83b265bc/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6c20441da854c76109edadd5c14467eeab1a532a78b987301c8ccc63f013fdb5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6c20441da854c76109edadd5c14467eeab1a532a78b987301c8ccc63f013fdb5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6c20441da854c76109edadd5c14467eeab1a532a78b987301c8ccc63f013fdb5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-different-port-20220412201228-42006",
	                "Source": "/var/lib/docker/volumes/default-k8s-different-port-20220412201228-42006/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-different-port-20220412201228-42006",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-different-port-20220412201228-42006",
	                "name.minikube.sigs.k8s.io": "default-k8s-different-port-20220412201228-42006",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "fdc8d9902df162e7f584a615cf1a67a1ddf8a0e7aa58b4c4180e9bac803f9952",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49412"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49411"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49408"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49410"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49409"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/fdc8d9902df1",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-different-port-20220412201228-42006": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "6642b489f963",
	                        "default-k8s-different-port-20220412201228-42006"
	                    ],
	                    "NetworkID": "e1e5eb80641804e0cf03f9ee1959284f2ec05fd6c94f6b6eb19931fc6032414c",
	                    "EndpointID": "dc02bef0f4abc1393769df835a0a013dde3e78db69d9fafacbeb8f560aaccea3",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
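One detail worth pulling out of the inspect output: this profile exposes the apiserver on 8444/tcp (the "different port" the test group is named for), bound to 127.0.0.1:49409. The harness does not run this, but the mapping can be resolved directly with docker's Go-template support, using the standard index idiom over NetworkSettings.Ports:

	# print the host port backing the container's 8444/tcp
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}' \
	  default-k8s-different-port-20220412201228-42006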
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220412201228-42006 -n default-k8s-different-port-20220412201228-42006
helpers_test.go:244: <<< TestStartStop/group/default-k8s-different-port/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/DeployApp]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-different-port-20220412201228-42006 logs -n 25
helpers_test.go:252: TestStartStop/group/default-k8s-different-port/serial/DeployApp logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                            Args                            |                     Profile                     |  User   | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| delete  | -p                                                         | disable-driver-mounts-20220412201227-42006      | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:12:27 UTC | Tue, 12 Apr 2022 20:12:28 UTC |
	|         | disable-driver-mounts-20220412201227-42006                 |                                                 |         |         |                               |                               |
	| -p      | bridge-20220412195202-42006                                | bridge-20220412195202-42006                     | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:12:49 UTC | Tue, 12 Apr 2022 20:12:50 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	| delete  | -p bridge-20220412195202-42006                             | bridge-20220412195202-42006                     | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:12:50 UTC | Tue, 12 Apr 2022 20:12:53 UTC |
	| start   | -p newest-cni-20220412201253-42006 --memory=2200           | newest-cni-20220412201253-42006                 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:12:53 UTC | Tue, 12 Apr 2022 20:13:47 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                 |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                 |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                 |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                 |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=containerd            |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.23.6-rc.0                          |                                                 |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | newest-cni-20220412201253-42006                 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:13:47 UTC | Tue, 12 Apr 2022 20:13:48 UTC |
	|         | newest-cni-20220412201253-42006                            |                                                 |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                               |                               |
	| stop    | -p                                                         | newest-cni-20220412201253-42006                 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:13:48 UTC | Tue, 12 Apr 2022 20:14:08 UTC |
	|         | newest-cni-20220412201253-42006                            |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | newest-cni-20220412201253-42006                 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:14:08 UTC | Tue, 12 Apr 2022 20:14:08 UTC |
	|         | newest-cni-20220412201253-42006                            |                                                 |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                               |                               |
	| start   | -p newest-cni-20220412201253-42006 --memory=2200           | newest-cni-20220412201253-42006                 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:14:08 UTC | Tue, 12 Apr 2022 20:14:42 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                 |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                 |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                 |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                 |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=containerd            |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.23.6-rc.0                          |                                                 |         |         |                               |                               |
	| ssh     | -p                                                         | newest-cni-20220412201253-42006                 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:14:43 UTC | Tue, 12 Apr 2022 20:14:43 UTC |
	|         | newest-cni-20220412201253-42006                            |                                                 |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                               |                               |
	| pause   | -p                                                         | newest-cni-20220412201253-42006                 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:14:43 UTC | Tue, 12 Apr 2022 20:14:44 UTC |
	|         | newest-cni-20220412201253-42006                            |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                               |                               |
	| unpause | -p                                                         | newest-cni-20220412201253-42006                 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:14:45 UTC | Tue, 12 Apr 2022 20:14:45 UTC |
	|         | newest-cni-20220412201253-42006                            |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                               |                               |
	| delete  | -p                                                         | newest-cni-20220412201253-42006                 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:14:46 UTC | Tue, 12 Apr 2022 20:14:49 UTC |
	|         | newest-cni-20220412201253-42006                            |                                                 |         |         |                               |                               |
	| delete  | -p                                                         | newest-cni-20220412201253-42006                 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:14:49 UTC | Tue, 12 Apr 2022 20:14:49 UTC |
	|         | newest-cni-20220412201253-42006                            |                                                 |         |         |                               |                               |
	| -p      | old-k8s-version-20220412200421-42006                       | old-k8s-version-20220412200421-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:17:18 UTC | Tue, 12 Apr 2022 20:17:19 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	| -p      | old-k8s-version-20220412200421-42006                       | old-k8s-version-20220412200421-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:17:20 UTC | Tue, 12 Apr 2022 20:17:21 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | old-k8s-version-20220412200421-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:17:22 UTC | Tue, 12 Apr 2022 20:17:22 UTC |
	|         | old-k8s-version-20220412200421-42006                       |                                                 |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                               |                               |
	| -p      | default-k8s-different-port-20220412201228-42006            | default-k8s-different-port-20220412201228-42006 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:17:23 UTC | Tue, 12 Apr 2022 20:17:24 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	| stop    | -p                                                         | old-k8s-version-20220412200421-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:17:23 UTC | Tue, 12 Apr 2022 20:17:28 UTC |
	|         | old-k8s-version-20220412200421-42006                       |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | old-k8s-version-20220412200421-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:17:29 UTC | Tue, 12 Apr 2022 20:17:29 UTC |
	|         | old-k8s-version-20220412200421-42006                       |                                                 |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                               |                               |
	| -p      | embed-certs-20220412200510-42006                           | embed-certs-20220412200510-42006                | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:18:10 UTC | Tue, 12 Apr 2022 20:18:11 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	| -p      | embed-certs-20220412200510-42006                           | embed-certs-20220412200510-42006                | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:18:13 UTC | Tue, 12 Apr 2022 20:18:13 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | embed-certs-20220412200510-42006                | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:18:14 UTC | Tue, 12 Apr 2022 20:18:14 UTC |
	|         | embed-certs-20220412200510-42006                           |                                                 |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                               |                               |
	| stop    | -p                                                         | embed-certs-20220412200510-42006                | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:18:15 UTC | Tue, 12 Apr 2022 20:18:25 UTC |
	|         | embed-certs-20220412200510-42006                           |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | embed-certs-20220412200510-42006                | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:18:25 UTC | Tue, 12 Apr 2022 20:18:25 UTC |
	|         | embed-certs-20220412200510-42006                           |                                                 |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                               |                               |
	| -p      | default-k8s-different-port-20220412201228-42006            | default-k8s-different-port-20220412201228-42006 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:25:26 UTC | Tue, 12 Apr 2022 20:25:27 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	|---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/04/12 20:18:25
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.18 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0412 20:18:25.862605  293188 out.go:297] Setting OutFile to fd 1 ...
	I0412 20:18:25.862730  293188 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0412 20:18:25.862740  293188 out.go:310] Setting ErrFile to fd 2...
	I0412 20:18:25.862745  293188 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0412 20:18:25.862852  293188 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/bin
	I0412 20:18:25.863116  293188 out.go:304] Setting JSON to false
	I0412 20:18:25.864718  293188 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":10859,"bootTime":1649783847,"procs":737,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1023-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0412 20:18:25.864796  293188 start.go:125] virtualization: kvm guest
	I0412 20:18:25.867632  293188 out.go:176] * [embed-certs-20220412200510-42006] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	I0412 20:18:25.869167  293188 out.go:176]   - MINIKUBE_LOCATION=13812
	I0412 20:18:25.867850  293188 notify.go:193] Checking for updates...
	I0412 20:18:25.870679  293188 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0412 20:18:25.872520  293188 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0412 20:18:25.874113  293188 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube
	I0412 20:18:25.875680  293188 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0412 20:18:25.876226  293188 config.go:178] Loaded profile config "embed-certs-20220412200510-42006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.5
	I0412 20:18:25.876728  293188 driver.go:346] Setting default libvirt URI to qemu:///system
	I0412 20:18:25.920777  293188 docker.go:137] docker version: linux-20.10.14
	I0412 20:18:25.920901  293188 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0412 20:18:26.018991  293188 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:75 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:44 SystemTime:2022-04-12 20:18:25.951512717 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1023-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662799872 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0412 20:18:26.019089  293188 docker.go:254] overlay module found
	I0412 20:18:26.021901  293188 out.go:176] * Using the docker driver based on existing profile
	I0412 20:18:26.021929  293188 start.go:284] selected driver: docker
	I0412 20:18:26.021936  293188 start.go:801] validating driver "docker" against &{Name:embed-certs-20220412200510-42006 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:embed-certs-20220412200510-42006 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0412 20:18:26.022056  293188 start.go:812] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	W0412 20:18:26.022097  293188 oci.go:120] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0412 20:18:26.022122  293188 out.go:241] ! Your cgroup does not allow setting memory.
	I0412 20:18:26.023822  293188 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0412 20:18:26.024448  293188 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0412 20:18:26.122834  293188 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:75 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:44 SystemTime:2022-04-12 20:18:26.056644105 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1023-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662799872 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	W0412 20:18:26.123002  293188 oci.go:120] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0412 20:18:26.123035  293188 out.go:241] ! Your cgroup does not allow setting memory.
	I0412 20:18:26.125282  293188 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
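
The two W-level lines above repeat this run's recurring cgroup warning: without a usable memory cgroup controller, Docker cannot enforce the --memory limit minikube asks for. One quick way to check the controller on a host like this agent, as a sketch (the exact output depends on whether cgroup v1 or v2 is in use):

    # last column is 1 when the memory controller is enabled
    grep memory /proc/cgroups
    # Docker's own view of the capability
    docker info --format '{{.MemoryLimit}}'
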
	I0412 20:18:26.125414  293188 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0412 20:18:26.125443  293188 cni.go:93] Creating CNI manager for ""
	I0412 20:18:26.125451  293188 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0412 20:18:26.125472  293188 start_flags.go:306] config:
	{Name:embed-certs-20220412200510-42006 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:embed-certs-20220412200510-42006 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0412 20:18:26.127545  293188 out.go:176] * Starting control plane node embed-certs-20220412200510-42006 in cluster embed-certs-20220412200510-42006
	I0412 20:18:26.127593  293188 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0412 20:18:26.129188  293188 out.go:176] * Pulling base image ...
	I0412 20:18:26.129236  293188 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime containerd
	I0412 20:18:26.129274  293188 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-containerd-overlay2-amd64.tar.lz4
	I0412 20:18:26.129311  293188 cache.go:57] Caching tarball of preloaded images
	I0412 20:18:26.129330  293188 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon
	I0412 20:18:26.129609  293188 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0412 20:18:26.129636  293188 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.5 on containerd
	I0412 20:18:26.129802  293188 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/embed-certs-20220412200510-42006/config.json ...
	I0412 20:18:26.175577  293188 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon, skipping pull
	I0412 20:18:26.175639  293188 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 exists in daemon, skipping load
	I0412 20:18:26.175656  293188 cache.go:206] Successfully downloaded all kic artifacts
	I0412 20:18:26.175717  293188 start.go:352] acquiring machines lock for embed-certs-20220412200510-42006: {Name:mk64f255895db788ec660fe05e5b2f5e43e4987c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0412 20:18:26.175846  293188 start.go:356] acquired machines lock for "embed-certs-20220412200510-42006" in 99.006µs
	I0412 20:18:26.175875  293188 start.go:94] Skipping create...Using existing machine configuration
	I0412 20:18:26.175886  293188 fix.go:55] fixHost starting: 
	I0412 20:18:26.176250  293188 cli_runner.go:164] Run: docker container inspect embed-certs-20220412200510-42006 --format={{.State.Status}}
	I0412 20:18:26.210832  293188 fix.go:103] recreateIfNeeded on embed-certs-20220412200510-42006: state=Stopped err=<nil>
	W0412 20:18:26.210874  293188 fix.go:129] unexpected machine state, will restart: <nil>
	I0412 20:18:26.213643  293188 out.go:176] * Restarting existing docker container for "embed-certs-20220412200510-42006" ...
	I0412 20:18:26.213726  293188 cli_runner.go:164] Run: docker start embed-certs-20220412200510-42006
	I0412 20:18:26.621467  293188 cli_runner.go:164] Run: docker container inspect embed-certs-20220412200510-42006 --format={{.State.Status}}
	I0412 20:18:26.658142  293188 kic.go:416] container "embed-certs-20220412200510-42006" state is running.
	I0412 20:18:26.658585  293188 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220412200510-42006
	I0412 20:18:26.695091  293188 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/embed-certs-20220412200510-42006/config.json ...
	I0412 20:18:26.695340  293188 machine.go:88] provisioning docker machine ...
	I0412 20:18:26.695369  293188 ubuntu.go:169] provisioning hostname "embed-certs-20220412200510-42006"
	I0412 20:18:26.695431  293188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220412200510-42006
	I0412 20:18:26.732045  293188 main.go:134] libmachine: Using SSH client type: native
	I0412 20:18:26.732417  293188 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7d71c0] 0x7da220 <nil>  [] 0s} 127.0.0.1 49432 <nil> <nil>}
	I0412 20:18:26.732462  293188 main.go:134] libmachine: About to run SSH command:
	sudo hostname embed-certs-20220412200510-42006 && echo "embed-certs-20220412200510-42006" | sudo tee /etc/hostname
	I0412 20:18:26.733264  293188 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34530->127.0.0.1:49432: read: connection reset by peer
	I0412 20:18:29.866005  293188 main.go:134] libmachine: SSH cmd err, output: <nil>: embed-certs-20220412200510-42006
	
	I0412 20:18:29.866093  293188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220412200510-42006
	I0412 20:18:29.900758  293188 main.go:134] libmachine: Using SSH client type: native
	I0412 20:18:29.900906  293188 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7d71c0] 0x7da220 <nil>  [] 0s} 127.0.0.1 49432 <nil> <nil>}
	I0412 20:18:29.900927  293188 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-20220412200510-42006' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-20220412200510-42006/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-20220412200510-42006' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0412 20:18:30.024252  293188 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0412 20:18:30.024282  293188 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube}
	I0412 20:18:30.024338  293188 ubuntu.go:177] setting up certificates
	I0412 20:18:30.024354  293188 provision.go:83] configureAuth start
	I0412 20:18:30.024412  293188 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220412200510-42006
	I0412 20:18:30.058758  293188 provision.go:138] copyHostCerts
	I0412 20:18:30.058845  293188 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem, removing ...
	I0412 20:18:30.058861  293188 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem
	I0412 20:18:30.058929  293188 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem (1082 bytes)
	I0412 20:18:30.059051  293188 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem, removing ...
	I0412 20:18:30.059069  293188 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem
	I0412 20:18:30.059099  293188 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem (1123 bytes)
	I0412 20:18:30.059165  293188 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem, removing ...
	I0412 20:18:30.059178  293188 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem
	I0412 20:18:30.059201  293188 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem (1675 bytes)
	I0412 20:18:30.059267  293188 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem org=jenkins.embed-certs-20220412200510-42006 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-20220412200510-42006]
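
The provisioning step above bakes the listed SANs (192.168.58.2, localhost, minikube, the profile name) into server.pem. If a TLS failure were suspected, the certificate on disk can be checked directly; a sketch, reusing the MINIKUBE_HOME value from the environment block earlier in this log:

    openssl x509 -noout -text -in "$MINIKUBE_HOME/machines/server.pem" \
      | grep -A1 'Subject Alternative Name'
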
	I0412 20:18:30.297705  293188 provision.go:172] copyRemoteCerts
	I0412 20:18:30.297778  293188 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0412 20:18:30.297829  293188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220412200510-42006
	I0412 20:18:30.332442  293188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/embed-certs-20220412200510-42006/id_rsa Username:docker}
	I0412 20:18:30.420873  293188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0412 20:18:30.439067  293188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0412 20:18:30.457093  293188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem --> /etc/docker/server.pem (1269 bytes)
	I0412 20:18:30.475014  293188 provision.go:86] duration metric: configureAuth took 450.644265ms
	I0412 20:18:30.475046  293188 ubuntu.go:193] setting minikube options for container-runtime
	I0412 20:18:30.475255  293188 config.go:178] Loaded profile config "embed-certs-20220412200510-42006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.5
	I0412 20:18:30.475269  293188 machine.go:91] provisioned docker machine in 3.779914385s
	I0412 20:18:30.475278  293188 start.go:306] post-start starting for "embed-certs-20220412200510-42006" (driver="docker")
	I0412 20:18:30.475291  293188 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0412 20:18:30.475347  293188 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0412 20:18:30.475392  293188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220412200510-42006
	I0412 20:18:30.510455  293188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/embed-certs-20220412200510-42006/id_rsa Username:docker}
	I0412 20:18:30.600261  293188 ssh_runner.go:195] Run: cat /etc/os-release
	I0412 20:18:30.603987  293188 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0412 20:18:30.604028  293188 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0412 20:18:30.604042  293188 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0412 20:18:30.604051  293188 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0412 20:18:30.604086  293188 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/addons for local assets ...
	I0412 20:18:30.604150  293188 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files for local assets ...
	I0412 20:18:30.604213  293188 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/420062.pem -> 420062.pem in /etc/ssl/certs
	I0412 20:18:30.604287  293188 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0412 20:18:30.611676  293188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/420062.pem --> /etc/ssl/certs/420062.pem (1708 bytes)
	I0412 20:18:30.630124  293188 start.go:309] post-start completed in 154.824821ms
	I0412 20:18:30.630194  293188 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0412 20:18:30.630238  293188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220412200510-42006
	I0412 20:18:30.664427  293188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/embed-certs-20220412200510-42006/id_rsa Username:docker}
	I0412 20:18:30.748775  293188 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0412 20:18:30.752838  293188 fix.go:57] fixHost completed within 4.576944958s
	I0412 20:18:30.752868  293188 start.go:81] releasing machines lock for "embed-certs-20220412200510-42006", held for 4.577006104s
	I0412 20:18:30.752946  293188 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220412200510-42006
	I0412 20:18:30.786779  293188 ssh_runner.go:195] Run: systemctl --version
	I0412 20:18:30.786833  293188 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0412 20:18:30.786839  293188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220412200510-42006
	I0412 20:18:30.786895  293188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220412200510-42006
	I0412 20:18:30.823951  293188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/embed-certs-20220412200510-42006/id_rsa Username:docker}
	I0412 20:18:30.826217  293188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/embed-certs-20220412200510-42006/id_rsa Username:docker}
	I0412 20:18:30.926862  293188 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0412 20:18:30.939004  293188 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0412 20:18:30.949472  293188 docker.go:183] disabling docker service ...
	I0412 20:18:30.949536  293188 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0412 20:18:30.959877  293188 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0412 20:18:30.969654  293188 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0412 20:18:31.049568  293188 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0412 20:18:31.130181  293188 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0412 20:18:31.139692  293188 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0412 20:18:31.153074  293188 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLm1vbml0b3IudjEuY2dyb3VwcyJdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNiIKICAgIHN0YXRzX2NvbGxlY3RfcGVyaW9kID0gMTAKICAgIGVuYWJsZV90bHNfc3RyZWFtaW5nID0gZmFsc2UKICAgIG1heF9jb250YWluZXJfbG9nX2xpbmVfc2l6ZSA9IDE2Mzg0CiAgICByZXN0cmljdF9vb21fc2NvcmVfYWRqID0gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZF0KICAgICAgZGlzY2FyZF91bnBhY2tlZF9sYXllcnMgPSB0cnVlCiAgICAgIHNuYXBzaG90dGVyID0gIm92ZXJsYXlmcyIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQuZGVmYXVsdF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnVudHJ1c3RlZF93b3JrbG9hZF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICIiCiAgICAgICAgcnVudGltZV9lbmdpbmUgPSAiIgogICAgICAgIHJ1bnRpbWVfcm9vdCA9ICIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzLnJ1bmNdCiAgICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICBTeXN0ZW1kQ2dyb3VwID0gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY25pXQogICAgICBiaW5fZGlyID0gIi9vcHQvY25pL2JpbiIKICAgICAgY29uZl9kaXIgPSAiL2V0Yy9jbmkvbmV0Lm1rIgogICAgICBjb25mX3RlbXBsYXRlID0gIiIKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5yZWdpc3RyeV0KICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5Lm1pcnJvcnNdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5Lm1pcnJvcnMuImRvY2tlci5pbyJdCiAgICAgICAgICBlbmRwb2ludCA9IFsiaHR0cHM6Ly9yZWdpc3RyeS0xLmRvY2tlci5pbyJdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuc2VydmljZS52MS5kaWZmLXNlcnZpY2UiXQogICAgZGVmYXVsdCA9IFsid2Fsa2luZyJdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ2MudjEuc2NoZWR1bGVyIl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
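
The containerd configuration in the command above travels inline as a single base64 blob. To read it, either decode the quoted payload from this log offline, or print the file it lands in on the node; both are sketches (the profile name is the one from this run, and <base64 payload> stands for the quoted string above):

    # offline
    echo '<base64 payload>' | base64 -d | less
    # or on the running node
    minikube -p embed-certs-20220412200510-42006 ssh -- sudo cat /etc/containerd/config.toml

Decoded, it is minikube's standard containerd config.toml for this runtime: version = 2, the overlayfs snapshotter, and cni conf_dir = "/etc/cni/net.mk", matching the kubelet.cni-conf-dir extra option used throughout this profile.
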
	I0412 20:18:31.166937  293188 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0412 20:18:31.173897  293188 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0412 20:18:31.180575  293188 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0412 20:18:31.251378  293188 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0412 20:18:31.325131  293188 start.go:441] Will wait 60s for socket path /run/containerd/containerd.sock
	I0412 20:18:31.325208  293188 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0412 20:18:31.329163  293188 start.go:462] Will wait 60s for crictl version
	I0412 20:18:31.329215  293188 ssh_runner.go:195] Run: sudo crictl version
	I0412 20:18:31.354553  293188 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-04-12T20:18:31Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0412 20:18:42.402319  293188 ssh_runner.go:195] Run: sudo crictl version
	I0412 20:18:42.427518  293188 start.go:471] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.5.10
	RuntimeApiVersion:  v1alpha2
	I0412 20:18:42.427582  293188 ssh_runner.go:195] Run: containerd --version
	I0412 20:18:42.448343  293188 ssh_runner.go:195] Run: containerd --version
	I0412 20:18:42.472811  293188 out.go:176] * Preparing Kubernetes v1.23.5 on containerd 1.5.10 ...
	I0412 20:18:42.472913  293188 cli_runner.go:164] Run: docker network inspect embed-certs-20220412200510-42006 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0412 20:18:42.506510  293188 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0412 20:18:42.510028  293188 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0412 20:18:39.992607  289404 retry.go:31] will retry after 15.44552029s: kubelet not initialised
	I0412 20:18:42.522298  293188 out.go:176]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0412 20:18:42.522410  293188 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime containerd
	I0412 20:18:42.522486  293188 ssh_runner.go:195] Run: sudo crictl images --output json
	I0412 20:18:42.548260  293188 containerd.go:607] all images are preloaded for containerd runtime.
	I0412 20:18:42.548288  293188 containerd.go:521] Images already preloaded, skipping extraction
	I0412 20:18:42.548350  293188 ssh_runner.go:195] Run: sudo crictl images --output json
	I0412 20:18:42.573330  293188 containerd.go:607] all images are preloaded for containerd runtime.
	I0412 20:18:42.573355  293188 cache_images.go:84] Images are preloaded, skipping loading
	I0412 20:18:42.573400  293188 ssh_runner.go:195] Run: sudo crictl info
	I0412 20:18:42.597742  293188 cni.go:93] Creating CNI manager for ""
	I0412 20:18:42.597769  293188 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0412 20:18:42.597782  293188 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0412 20:18:42.597800  293188 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.23.5 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-20220412200510-42006 NodeName:embed-certs-20220412200510-42006 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0412 20:18:42.597944  293188 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "embed-certs-20220412200510-42006"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.5
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
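
This is the full kubeadm configuration that is copied to /var/tmp/minikube/kubeadm.yaml.new a few lines below. (The "0%!"(MISSING) strings are almost certainly a logging artifact of the literal % signs passing through a printf-style formatter; the file written to the node should carry plain "0%" eviction thresholds.) Saved locally, the config can be sanity-checked without a cluster, for example by asking kubeadm which images it implies; a sketch assuming a kubeadm v1.23.x binary on PATH and the YAML saved as kubeadm.yaml:

    kubeadm config images list --config kubeadm.yaml
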
	
	I0412 20:18:42.598030  293188 kubeadm.go:936] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.5/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=embed-certs-20220412200510-42006 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.58.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.5 ClusterName:embed-certs-20220412200510-42006 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
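
The [Service] drop-in above clears and replaces kubelet's ExecStart with the containerd-specific flags; it is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf by the scp step just below. On the node, the merged unit that systemd will actually run can be reviewed with (a sketch):

    sudo systemctl daemon-reload
    systemctl cat kubelet
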
	I0412 20:18:42.598081  293188 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.5
	I0412 20:18:42.605494  293188 binaries.go:44] Found k8s binaries, skipping transfer
	I0412 20:18:42.605604  293188 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0412 20:18:42.612680  293188 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (577 bytes)
	I0412 20:18:42.626260  293188 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0412 20:18:42.639600  293188 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2061 bytes)
	I0412 20:18:42.653027  293188 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0412 20:18:42.656044  293188 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0412 20:18:42.665264  293188 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/embed-certs-20220412200510-42006 for IP: 192.168.58.2
	I0412 20:18:42.665394  293188 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.key
	I0412 20:18:42.665433  293188 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.key
	I0412 20:18:42.665515  293188 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/embed-certs-20220412200510-42006/client.key
	I0412 20:18:42.665564  293188 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/embed-certs-20220412200510-42006/apiserver.key.cee25041
	I0412 20:18:42.665596  293188 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/embed-certs-20220412200510-42006/proxy-client.key
	I0412 20:18:42.665720  293188 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/42006.pem (1338 bytes)
	W0412 20:18:42.665758  293188 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/42006_empty.pem, impossibly tiny 0 bytes
	I0412 20:18:42.665772  293188 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem (1679 bytes)
	I0412 20:18:42.665799  293188 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem (1082 bytes)
	I0412 20:18:42.665824  293188 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem (1123 bytes)
	I0412 20:18:42.665847  293188 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem (1675 bytes)
	I0412 20:18:42.665883  293188 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/420062.pem (1708 bytes)
	I0412 20:18:42.666420  293188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/embed-certs-20220412200510-42006/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0412 20:18:42.684961  293188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/embed-certs-20220412200510-42006/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0412 20:18:42.703505  293188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/embed-certs-20220412200510-42006/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0412 20:18:42.722170  293188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/embed-certs-20220412200510-42006/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0412 20:18:42.740728  293188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0412 20:18:42.759411  293188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0412 20:18:42.777909  293188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0412 20:18:42.795814  293188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0412 20:18:42.813492  293188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0412 20:18:42.831827  293188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/42006.pem --> /usr/share/ca-certificates/42006.pem (1338 bytes)
	I0412 20:18:42.850182  293188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/420062.pem --> /usr/share/ca-certificates/420062.pem (1708 bytes)
	I0412 20:18:42.867975  293188 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0412 20:18:42.882318  293188 ssh_runner.go:195] Run: openssl version
	I0412 20:18:42.887540  293188 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/42006.pem && ln -fs /usr/share/ca-certificates/42006.pem /etc/ssl/certs/42006.pem"
	I0412 20:18:42.895898  293188 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42006.pem
	I0412 20:18:42.899141  293188 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Apr 12 19:26 /usr/share/ca-certificates/42006.pem
	I0412 20:18:42.899202  293188 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42006.pem
	I0412 20:18:42.904418  293188 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/42006.pem /etc/ssl/certs/51391683.0"
	I0412 20:18:42.911721  293188 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/420062.pem && ln -fs /usr/share/ca-certificates/420062.pem /etc/ssl/certs/420062.pem"
	I0412 20:18:42.919627  293188 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/420062.pem
	I0412 20:18:42.922828  293188 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Apr 12 19:26 /usr/share/ca-certificates/420062.pem
	I0412 20:18:42.922889  293188 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/420062.pem
	I0412 20:18:42.928163  293188 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/420062.pem /etc/ssl/certs/3ec20f2e.0"
	I0412 20:18:42.935357  293188 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0412 20:18:42.942820  293188 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0412 20:18:42.945929  293188 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Apr 12 19:21 /usr/share/ca-certificates/minikubeCA.pem
	I0412 20:18:42.945976  293188 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0412 20:18:42.950738  293188 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
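	The hash-and-symlink sequence above is how OpenSSL-based trust stores index CA certificates: `openssl x509 -hash -noout` prints the subject-name hash (e.g. 51391683), and the PEM is linked as `<hash>.0` under /etc/ssl/certs so chain verification can locate it. A minimal Go sketch of the same flow; the cert path in main is a placeholder for illustration, not one of the files above:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// installCACert mirrors the log's flow: hash the PEM with openssl,
// then symlink it into /etc/ssl/certs under "<hash>.0" so OpenSSL
// can find it during chain verification.
func installCACert(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pem, err)
	}
	hash := strings.TrimSpace(string(out))
	link := "/etc/ssl/certs/" + hash + ".0"
	// Equivalent of ln -fs: drop any stale link, then create a fresh one.
	_ = os.Remove(link)
	return os.Symlink(pem, link)
}

func main() {
	// Placeholder path; substitute a real CA bundle to try this.
	if err := installCACert("/usr/share/ca-certificates/example.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}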
	I0412 20:18:42.957667  293188 kubeadm.go:391] StartCluster: {Name:embed-certs-20220412200510-42006 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:embed-certs-20220412200510-42006 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0412 20:18:42.957775  293188 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0412 20:18:42.957819  293188 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0412 20:18:42.983592  293188 cri.go:87] found id: "45fabe7cb7395e0c30a4393ad9200abaf7881d0466d5ffdcde46faf8e637daae"
	I0412 20:18:42.983618  293188 cri.go:87] found id: "99c30d34ba6769dbe90b18eefcf0db92072e5d977b32371ee959bba91b958dc9"
	I0412 20:18:42.983624  293188 cri.go:87] found id: "1549b6cbd198c45abd7224f0fbd5ce0d6713b1d4c5ccbad32a34ac2b6a109d2d"
	I0412 20:18:42.983631  293188 cri.go:87] found id: "3ecbbe2de190c9c1e2f575bb88b355a7eaf09932cb16fd1a6cef069051de9930"
	I0412 20:18:42.983636  293188 cri.go:87] found id: "3bb4ed6826e041fff709fbb31d1f2446a15f08bcc0fa07eb151243acd0226bed"
	I0412 20:18:42.983642  293188 cri.go:87] found id: "e67989f440e4332c6ff00c54e8fa657032c034f05a0edc75576cb16ffd4794b0"
	I0412 20:18:42.983648  293188 cri.go:87] found id: ""
	I0412 20:18:42.983682  293188 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0412 20:18:42.997448  293188 cri.go:114] JSON = null
	W0412 20:18:42.997504  293188 kubeadm.go:398] unpause failed: list paused: list returned 0 containers, but ps returned 6
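	The warning above comes from a cross-check between two views of the runtime: crictl ps returned six kube-system container IDs, while `runc list -f json` printed null (runc emits literal `null` when it tracks no containers), so the paused-container count could not be reconciled. A sketch of that cross-check; the runcState field names are assumed from runc's JSON output, not taken from this log:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

// runcState holds the fields we need from `runc list -f json`
// (assumed field names; runc prints `null` when no containers exist,
// which is exactly the "JSON = null" line above).
type runcState struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

func main() {
	// List kube-system container IDs the same way the log does.
	psOut, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		fmt.Println("crictl:", err)
		return
	}
	ids := strings.Fields(string(psOut))

	// Cross-check against runc's view of the same runtime root.
	listOut, err := exec.Command("sudo", "runc",
		"--root", "/run/containerd/runc/k8s.io", "list", "-f", "json").Output()
	if err != nil {
		fmt.Println("runc:", err)
		return
	}
	var states []runcState // stays nil if runc printed "null"
	if err := json.Unmarshal(listOut, &states); err != nil {
		fmt.Println("decode:", err)
		return
	}
	if len(states) != len(ids) {
		fmt.Printf("list returned %d containers, but ps returned %d\n", len(states), len(ids))
	}
}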
	I0412 20:18:42.997555  293188 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0412 20:18:43.004738  293188 kubeadm.go:402] found existing configuration files, will attempt cluster restart
	I0412 20:18:43.004762  293188 kubeadm.go:601] restartCluster start
	I0412 20:18:43.004809  293188 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0412 20:18:43.012338  293188 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:18:43.013058  293188 kubeconfig.go:116] verify returned: extract IP: "embed-certs-20220412200510-42006" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0412 20:18:43.013376  293188 kubeconfig.go:127] "embed-certs-20220412200510-42006" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig - will repair!
	I0412 20:18:43.013929  293188 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig: {Name:mk47182dadcc139652898f38b199a7292a3b4031 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 20:18:43.015377  293188 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0412 20:18:43.022831  293188 api_server.go:165] Checking apiserver status ...
	I0412 20:18:43.022901  293188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:18:43.032323  293188 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:18:43.232731  293188 api_server.go:165] Checking apiserver status ...
	I0412 20:18:43.232839  293188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:18:43.241744  293188 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:18:43.433096  293188 api_server.go:165] Checking apiserver status ...
	I0412 20:18:43.433175  293188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:18:43.442230  293188 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:18:43.632561  293188 api_server.go:165] Checking apiserver status ...
	I0412 20:18:43.632636  293188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:18:43.641527  293188 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:18:43.832747  293188 api_server.go:165] Checking apiserver status ...
	I0412 20:18:43.832833  293188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:18:43.841699  293188 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:18:44.032995  293188 api_server.go:165] Checking apiserver status ...
	I0412 20:18:44.033117  293188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:18:44.042221  293188 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:18:44.232605  293188 api_server.go:165] Checking apiserver status ...
	I0412 20:18:44.232679  293188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:18:44.241596  293188 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:18:44.432814  293188 api_server.go:165] Checking apiserver status ...
	I0412 20:18:44.432898  293188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:18:44.441681  293188 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:18:44.633020  293188 api_server.go:165] Checking apiserver status ...
	I0412 20:18:44.633115  293188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:18:44.642100  293188 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:18:44.833416  293188 api_server.go:165] Checking apiserver status ...
	I0412 20:18:44.833505  293188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:18:44.843045  293188 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:18:45.033244  293188 api_server.go:165] Checking apiserver status ...
	I0412 20:18:45.033372  293188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:18:45.042455  293188 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:18:45.232743  293188 api_server.go:165] Checking apiserver status ...
	I0412 20:18:45.232829  293188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:18:45.241922  293188 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:18:45.433151  293188 api_server.go:165] Checking apiserver status ...
	I0412 20:18:45.433234  293188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:18:45.442285  293188 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:18:45.632437  293188 api_server.go:165] Checking apiserver status ...
	I0412 20:18:45.632580  293188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:18:45.641663  293188 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:18:45.833174  293188 api_server.go:165] Checking apiserver status ...
	I0412 20:18:45.833254  293188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:18:45.842437  293188 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:18:46.032944  293188 api_server.go:165] Checking apiserver status ...
	I0412 20:18:46.033024  293188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:18:46.042136  293188 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:18:46.042169  293188 api_server.go:165] Checking apiserver status ...
	I0412 20:18:46.042209  293188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:18:46.050391  293188 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:18:46.050420  293188 kubeadm.go:576] needs reconfigure: apiserver error: timed out waiting for the condition
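	The block above is a fixed-cadence retry loop: `pgrep -xnf kube-apiserver.*minikube.*` exits with status 1 while no matching process exists, and the caller re-polls roughly every 200ms until it gives up and declares the apiserver timed out. A minimal sketch of such a loop; the interval and timeout values are illustrative, not minikube's actual configuration:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerPID polls pgrep until kube-apiserver shows up or the
// deadline passes. pgrep exits 1 when nothing matches, which surfaces in
// Go as a non-nil error from Output() — the "Process exited with
// status 1" lines above.
func waitForAPIServerPID(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			return string(out), nil
		}
		time.Sleep(200 * time.Millisecond) // illustrative cadence
	}
	return "", fmt.Errorf("timed out waiting for kube-apiserver process")
}

func main() {
	pid, err := waitForAPIServerPID(3 * time.Second)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Print("kube-apiserver pid: ", pid)
}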
	I0412 20:18:46.050427  293188 kubeadm.go:1067] stopping kube-system containers ...
	I0412 20:18:46.050443  293188 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0412 20:18:46.050494  293188 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0412 20:18:46.077200  293188 cri.go:87] found id: "45fabe7cb7395e0c30a4393ad9200abaf7881d0466d5ffdcde46faf8e637daae"
	I0412 20:18:46.077226  293188 cri.go:87] found id: "99c30d34ba6769dbe90b18eefcf0db92072e5d977b32371ee959bba91b958dc9"
	I0412 20:18:46.077240  293188 cri.go:87] found id: "1549b6cbd198c45abd7224f0fbd5ce0d6713b1d4c5ccbad32a34ac2b6a109d2d"
	I0412 20:18:46.077247  293188 cri.go:87] found id: "3ecbbe2de190c9c1e2f575bb88b355a7eaf09932cb16fd1a6cef069051de9930"
	I0412 20:18:46.077255  293188 cri.go:87] found id: "3bb4ed6826e041fff709fbb31d1f2446a15f08bcc0fa07eb151243acd0226bed"
	I0412 20:18:46.077286  293188 cri.go:87] found id: "e67989f440e4332c6ff00c54e8fa657032c034f05a0edc75576cb16ffd4794b0"
	I0412 20:18:46.077300  293188 cri.go:87] found id: ""
	I0412 20:18:46.077307  293188 cri.go:232] Stopping containers: [45fabe7cb7395e0c30a4393ad9200abaf7881d0466d5ffdcde46faf8e637daae 99c30d34ba6769dbe90b18eefcf0db92072e5d977b32371ee959bba91b958dc9 1549b6cbd198c45abd7224f0fbd5ce0d6713b1d4c5ccbad32a34ac2b6a109d2d 3ecbbe2de190c9c1e2f575bb88b355a7eaf09932cb16fd1a6cef069051de9930 3bb4ed6826e041fff709fbb31d1f2446a15f08bcc0fa07eb151243acd0226bed e67989f440e4332c6ff00c54e8fa657032c034f05a0edc75576cb16ffd4794b0]
	I0412 20:18:46.077363  293188 ssh_runner.go:195] Run: which crictl
	I0412 20:18:46.080533  293188 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop 45fabe7cb7395e0c30a4393ad9200abaf7881d0466d5ffdcde46faf8e637daae 99c30d34ba6769dbe90b18eefcf0db92072e5d977b32371ee959bba91b958dc9 1549b6cbd198c45abd7224f0fbd5ce0d6713b1d4c5ccbad32a34ac2b6a109d2d 3ecbbe2de190c9c1e2f575bb88b355a7eaf09932cb16fd1a6cef069051de9930 3bb4ed6826e041fff709fbb31d1f2446a15f08bcc0fa07eb151243acd0226bed e67989f440e4332c6ff00c54e8fa657032c034f05a0edc75576cb16ffd4794b0
	I0412 20:18:46.108221  293188 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0412 20:18:46.118944  293188 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0412 20:18:46.126295  293188 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Apr 12 20:05 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Apr 12 20:05 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2067 Apr 12 20:05 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Apr 12 20:05 /etc/kubernetes/scheduler.conf
	
	I0412 20:18:46.126355  293188 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0412 20:18:46.133414  293188 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0412 20:18:46.140348  293188 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0412 20:18:46.147289  293188 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:18:46.147353  293188 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0412 20:18:46.153983  293188 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0412 20:18:46.160779  293188 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:18:46.160847  293188 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0412 20:18:46.167729  293188 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0412 20:18:46.174673  293188 kubeadm.go:678] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0412 20:18:46.174697  293188 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0412 20:18:46.219984  293188 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0412 20:18:46.780655  293188 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0412 20:18:46.916175  293188 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0412 20:18:46.967869  293188 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
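	Rather than a full `kubeadm init`, the restart path above replays individual init phases (certs, kubeconfigs, kubelet-start, control-plane static pods, local etcd) against the existing configuration. A sketch of the same sequence, reusing the binary path and config file shown in the log:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same phase order as the log lines above.
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, p := range phases {
		args := append([]string{"init", "phase"}, p...)
		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
		cmd := exec.Command("/var/lib/minikube/binaries/v1.23.5/kubeadm", args...)
		if out, err := cmd.CombinedOutput(); err != nil {
			fmt.Printf("phase %v failed: %v\n%s\n", p, err, out)
			return
		}
	}
}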
	I0412 20:18:47.020948  293188 api_server.go:51] waiting for apiserver process to appear ...
	I0412 20:18:47.021032  293188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:18:47.530989  293188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:18:48.030856  293188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:18:48.530765  293188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:18:49.030619  293188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:18:49.530473  293188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:18:50.030687  293188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:18:50.530420  293188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:18:51.031271  293188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:18:51.530751  293188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:18:52.030588  293188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:18:52.530431  293188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:18:53.031324  293188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:18:53.091818  293188 api_server.go:71] duration metric: took 6.07087219s to wait for apiserver process to appear ...
	I0412 20:18:53.091857  293188 api_server.go:87] waiting for apiserver healthz status ...
	I0412 20:18:53.091871  293188 api_server.go:240] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0412 20:18:53.092280  293188 api_server.go:256] stopped: https://192.168.58.2:8443/healthz: Get "https://192.168.58.2:8443/healthz": dial tcp 192.168.58.2:8443: connect: connection refused
	I0412 20:18:53.593049  293188 api_server.go:240] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0412 20:18:55.985909  293188 api_server.go:266] https://192.168.58.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0412 20:18:55.985946  293188 api_server.go:102] status: https://192.168.58.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0412 20:18:56.093093  293188 api_server.go:240] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0412 20:18:56.106818  293188 api_server.go:266] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0412 20:18:56.106855  293188 api_server.go:102] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0412 20:18:56.593283  293188 api_server.go:240] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0412 20:18:56.598524  293188 api_server.go:266] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0412 20:18:56.598552  293188 api_server.go:102] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0412 20:18:57.093125  293188 api_server.go:240] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0412 20:18:57.098065  293188 api_server.go:266] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0412 20:18:57.098143  293188 api_server.go:102] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0412 20:18:57.593444  293188 api_server.go:240] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0412 20:18:57.598330  293188 api_server.go:266] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0412 20:18:57.604742  293188 api_server.go:140] control plane version: v1.23.5
	I0412 20:18:57.604771  293188 api_server.go:130] duration metric: took 4.512906341s to wait for apiserver health ...
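	The healthz probe above is a plain HTTPS GET that tolerates transient failures: anonymous requests get 403 until the RBAC bootstrap roles exist, then 500 while post-start hooks are still failing, and finally 200 with body "ok". A self-contained sketch of that poll; TLS verification is skipped here only to keep the example standalone, whereas the real client trusts the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// pollHealthz repeats the check from the log: GET /healthz until it
// returns 200. Non-200 responses are printed and retried, matching the
// 403 → 500 → 200 progression above.
func pollHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("healthz never returned 200 within %s", timeout)
}

func main() {
	if err := pollHealthz("https://192.168.58.2:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}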
	I0412 20:18:57.604785  293188 cni.go:93] Creating CNI manager for ""
	I0412 20:18:57.604793  293188 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0412 20:18:55.442437  289404 kubeadm.go:752] kubelet initialised
	I0412 20:18:55.442463  289404 kubeadm.go:753] duration metric: took 58.431626455s waiting for restarted kubelet to initialise ...
	I0412 20:18:55.442472  289404 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0412 20:18:55.446881  289404 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace to be "Ready" ...
	I0412 20:18:57.452309  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:18:57.607772  293188 out.go:176] * Configuring CNI (Container Networking Interface) ...
	I0412 20:18:57.607862  293188 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0412 20:18:57.612047  293188 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.5/kubectl ...
	I0412 20:18:57.612106  293188 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0412 20:18:57.625606  293188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0412 20:18:58.259688  293188 system_pods.go:43] waiting for kube-system pods to appear ...
	I0412 20:18:58.267983  293188 system_pods.go:59] 9 kube-system pods found
	I0412 20:18:58.268016  293188 system_pods.go:61] "coredns-64897985d-zvglg" [d5fab6b5-c460-460f-8cb9-6a8df3a0a493] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0412 20:18:58.268026  293188 system_pods.go:61] "etcd-embed-certs-20220412200510-42006" [f0b1b85a-9a7c-49a3-9c3a-f120f8274f99] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0412 20:18:58.268033  293188 system_pods.go:61] "kindnet-7f7sj" [059bb69b-b8de-4f71-85b1-8d7391491598] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0412 20:18:58.268040  293188 system_pods.go:61] "kube-apiserver-embed-certs-20220412200510-42006" [6cfeb71b-0d01-4c67-8a26-edbc213c684f] Running
	I0412 20:18:58.268048  293188 system_pods.go:61] "kube-controller-manager-embed-certs-20220412200510-42006" [726d3fb3-6d83-4325-9328-a407b3bffd34] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0412 20:18:58.268055  293188 system_pods.go:61] "kube-proxy-6nznr" [aa45eb74-fde3-453a-82ad-e29ae4116d51] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0412 20:18:58.268060  293188 system_pods.go:61] "kube-scheduler-embed-certs-20220412200510-42006" [c03b607f-b4f9-4ff6-8d07-8890c53a7dd6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0412 20:18:58.268085  293188 system_pods.go:61] "metrics-server-b955d9d8-6cvmp" [cfc4546c-e7eb-4626-af34-9d7382032070] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0412 20:18:58.268094  293188 system_pods.go:61] "storage-provisioner" [c17111bc-be71-4c72-9d44-0de354dc03e1] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0412 20:18:58.268110  293188 system_pods.go:74] duration metric: took 8.401782ms to wait for pod list to return data ...
	I0412 20:18:58.268120  293188 node_conditions.go:102] verifying NodePressure condition ...
	I0412 20:18:58.270949  293188 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0412 20:18:58.270997  293188 node_conditions.go:123] node cpu capacity is 8
	I0412 20:18:58.271013  293188 node_conditions.go:105] duration metric: took 2.882717ms to run NodePressure ...
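	The pod and node summaries above correspond to two straightforward API reads: list the pods in kube-system, then read each node's capacity for the NodePressure check. A sketch using client-go (requires k8s.io/client-go and k8s.io/api in go.mod; the kubeconfig path is taken from the log but any valid one works):

package main

import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// "waiting for kube-system pods to appear" boils down to this list call.
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))

	// NodePressure verification reads capacity off each node's status.
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[v1.ResourceCPU]
		eph := n.Status.Capacity[v1.ResourceEphemeralStorage]
		fmt.Printf("node %s: cpu capacity %s, ephemeral storage %s\n", n.Name, cpu.String(), eph.String())
	}
}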
	I0412 20:18:58.271045  293188 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0412 20:18:58.422028  293188 kubeadm.go:737] waiting for restarted kubelet to initialise ...
	I0412 20:18:58.426575  293188 kubeadm.go:752] kubelet initialised
	I0412 20:18:58.426601  293188 kubeadm.go:753] duration metric: took 4.547593ms waiting for restarted kubelet to initialise ...
	I0412 20:18:58.426610  293188 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0412 20:18:58.432786  293188 pod_ready.go:78] waiting up to 4m0s for pod "coredns-64897985d-zvglg" in "kube-system" namespace to be "Ready" ...
	I0412 20:19:00.439498  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:18:59.452702  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:01.951942  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:03.952202  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:02.939601  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:05.439254  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:05.952347  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:07.952479  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:07.439551  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:09.939856  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:10.452258  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:12.453023  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:12.439364  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:14.939042  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:14.453080  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:16.952944  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:16.939458  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:19.439708  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:19.452528  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:21.952621  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:23.952660  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:21.938672  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:23.939041  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:25.953037  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:28.452797  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:26.439455  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:28.939098  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:30.952242  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:32.952805  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:30.939386  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:33.439558  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:35.452316  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:37.951759  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:35.939628  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:38.439636  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:39.952865  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:41.952966  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:40.939568  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:43.439290  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:44.451931  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:46.452616  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:48.952981  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:45.938661  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:47.939519  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:50.439960  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:51.452872  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:53.952148  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:52.939629  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:54.941643  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:56.452819  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:58.952504  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:57.438786  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:59.439809  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:01.452181  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:03.952960  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:01.939098  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:03.939221  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:05.953040  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:08.452051  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:05.939416  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:07.939575  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:10.438960  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:10.452446  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:12.452585  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:12.439256  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:14.439328  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:14.952918  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:17.453178  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:16.939000  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:19.438936  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:19.953047  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:22.452913  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:21.439374  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:23.439718  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:25.440229  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:24.952197  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:26.952775  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:27.938777  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:29.939549  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:29.452518  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:31.452773  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:33.951896  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:32.439297  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:34.939290  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:35.952124  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:37.952888  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:36.939443  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:39.439507  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:40.452829  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:42.952723  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:41.939547  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:44.439685  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:45.452682  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:47.952663  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:46.439959  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:48.939551  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:49.952833  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:51.953215  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:51.439298  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:53.939194  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:54.452966  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:56.952662  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:56.439050  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:58.439250  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:59.452894  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:01.452993  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:03.952039  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:00.939359  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:03.439609  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:06.452224  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:08.951951  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:05.938661  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:07.939824  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:10.439218  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:10.952389  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:12.952480  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:12.939504  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:15.439451  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:15.452284  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:17.953019  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:17.939505  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:20.439836  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:20.451991  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:22.452912  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:22.938740  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:24.939630  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:24.952892  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:27.453024  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:27.439712  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:29.939146  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:29.953115  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:32.452095  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:32.439187  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:34.439528  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:34.453190  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:36.952740  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:36.939450  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:39.438925  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:39.453093  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:41.952831  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:41.439158  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:43.440112  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:44.452526  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:46.453025  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:48.952697  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:45.939050  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:47.939118  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:49.939338  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:51.452345  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:53.452917  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:52.439020  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:54.439255  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:55.952397  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:57.952650  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:56.939471  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:59.438970  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:22:00.451875  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:22:02.452533  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:22:01.439410  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:22:03.439747  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:22:04.952323  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:22:06.953080  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:22:05.939704  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:22:08.439258  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:22:09.452783  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:22:11.452916  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:22:13.952781  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:22:10.939241  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:22:13.439644  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:22:16.452431  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:22:18.952125  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:22:15.939011  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:22:18.439077  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:22:20.439255  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:22:20.953057  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:22:23.452290  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:22:22.439645  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:22:24.938780  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:22:25.953032  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:22:28.452613  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:22:26.939148  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:22:29.439156  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:22:30.952045  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:22:33.453012  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:22:31.439554  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:22:33.939040  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:22:35.952844  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:22:38.452043  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:22:36.439185  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:22:38.939474  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:22:40.452897  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:22:42.952703  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:22:41.439595  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:22:43.439860  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:22:44.952775  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:22:47.452279  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:22:45.938954  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:22:48.439103  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:22:49.452612  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:22:51.452653  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:22:53.952226  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:22:50.939266  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:22:52.939428  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:22:55.439627  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:22:55.449553  289404 pod_ready.go:81] duration metric: took 4m0.002631772s waiting for pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace to be "Ready" ...
	E0412 20:22:55.449598  289404 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace to be "Ready" (will not retry!)
	I0412 20:22:55.449626  289404 pod_ready.go:38] duration metric: took 4m0.007144091s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0412 20:22:55.449665  289404 kubeadm.go:605] restartCluster took 5m9.090565131s
	W0412 20:22:55.449859  289404 out.go:241] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
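The pod_ready.go lines above are a timed readiness poll: every ~2s the pod's status is fetched and its conditions are checked for Ready=True, until the 4m budget runs out and the cluster is reset instead. A minimal client-go sketch of that pattern (the pod name, namespace, kubeconfig path, and the 2s/4m intervals are taken from the log; everything else is assumed, and this is not minikube's actual implementation):

-- go sketch --
// A sketch of the readiness wait seen above: poll a pod every 2s for up to
// 4m and report whether it ever reaches the Ready condition.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Poll every 2s, give up after 4m — matching the cadence of the log
	// lines and the "timed out waiting 4m0s" failure above.
	err = wait.PollImmediate(2*time.Second, 4*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
			"coredns-5644d7b6d9-rdxgk", metav1.GetOptions{})
		if err != nil {
			return false, nil // treat API errors as transient: keep polling
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				return true, nil
			}
		}
		return false, nil // still Pending/unschedulable, as in the log
	})
	fmt.Println("ready wait result:", err) // wait.ErrWaitTimeout on the failure path above
}
-- /go sketch --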
	I0412 20:22:55.449901  289404 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0412 20:22:56.788407  289404 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.338480882s)
	I0412 20:22:56.788465  289404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0412 20:22:56.798571  289404 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0412 20:22:56.806252  289404 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0412 20:22:56.806310  289404 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0412 20:22:56.814094  289404 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
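The "config check failed, skipping stale config cleanup" above boils down to an existence test on the four kubeconfig files that the preceding kubeadm reset just deleted, so the non-zero `ls` exit is expected here. A sketch of an equivalent check (paths from the log; the surrounding structure is assumed):

-- go sketch --
// A sketch of the stale-config existence check above: kubeadm reset has
// already removed these files, so the stat fails and cleanup is skipped.
package main

import (
	"fmt"
	"os"
)

func main() {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		if _, err := os.Stat(f); err != nil {
			fmt.Printf("config check failed for %s: %v\n", f, err)
			return // mirrors "skipping stale config cleanup"
		}
	}
	fmt.Println("all kubeconfigs present; stale config cleanup would proceed")
}
-- /go sketch --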
	I0412 20:22:56.814147  289404 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0412 20:22:57.205705  289404 out.go:203]   - Generating certificates and keys ...
	I0412 20:22:57.761892  289404 out.go:203]   - Booting up control plane ...
	I0412 20:22:57.939670  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:22:58.435823  293188 pod_ready.go:81] duration metric: took 4m0.002987778s waiting for pod "coredns-64897985d-zvglg" in "kube-system" namespace to be "Ready" ...
	E0412 20:22:58.435854  293188 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "coredns-64897985d-zvglg" in "kube-system" namespace to be "Ready" (will not retry!)
	I0412 20:22:58.435889  293188 pod_ready.go:38] duration metric: took 4m0.00926918s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0412 20:22:58.435924  293188 kubeadm.go:605] restartCluster took 4m15.431156944s
	W0412 20:22:58.436101  293188 out.go:241] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0412 20:22:58.436140  293188 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0412 20:23:00.308017  293188 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.871849788s)
	I0412 20:23:00.308112  293188 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0412 20:23:00.320139  293188 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0412 20:23:00.327966  293188 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0412 20:23:00.328042  293188 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0412 20:23:00.336326  293188 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0412 20:23:00.336368  293188 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0412 20:23:00.611970  293188 out.go:203]   - Generating certificates and keys ...
	I0412 20:23:01.168395  293188 out.go:203]   - Booting up control plane ...
	I0412 20:23:06.805594  289404 out.go:203]   - Configuring RBAC rules ...
	I0412 20:23:07.228571  289404 cni.go:93] Creating CNI manager for ""
	I0412 20:23:07.228608  289404 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0412 20:23:07.230875  289404 out.go:176] * Configuring CNI (Container Networking Interface) ...
	I0412 20:23:07.230960  289404 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0412 20:23:07.235577  289404 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.16.0/kubectl ...
	I0412 20:23:07.235606  289404 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0412 20:23:07.249805  289404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
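The "scp memory --> /var/tmp/minikube/cni.yaml" plus kubectl apply pair above writes the recommended kindnet manifest to disk inside the node and applies it with the version-pinned kubectl. A rough equivalent (binary and file paths from the log; the manifest body below is a placeholder, not the real kindnet YAML, and minikube runs these steps over its SSH runner rather than locally):

-- go sketch --
// A sketch of the CNI apply step above: write a manifest to
// /var/tmp/minikube/cni.yaml, then apply it with the versioned kubectl.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	manifest := []byte("# kindnet manifest would go here (placeholder)\n")
	if err := os.WriteFile("/var/tmp/minikube/cni.yaml", manifest, 0644); err != nil {
		panic(err)
	}
	out, err := exec.Command("sudo",
		"/var/lib/minikube/binaries/v1.16.0/kubectl", "apply",
		"--kubeconfig=/var/lib/minikube/kubeconfig",
		"-f", "/var/tmp/minikube/cni.yaml").CombinedOutput()
	fmt.Printf("kubectl apply: %s err=%v\n", out, err)
}
-- /go sketch --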
	I0412 20:23:07.476958  289404 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0412 20:23:07.477058  289404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:07.477062  289404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.25.2 minikube.k8s.io/commit=dcd548d63d1c0dcbdc0ffc0bd37d4379117c142f minikube.k8s.io/name=old-k8s-version-20220412200421-42006 minikube.k8s.io/updated_at=2022_04_12T20_23_07_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:07.617207  289404 ops.go:34] apiserver oom_adj: -16
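The ops.go line records the kube-apiserver's OOM-killer adjustment read from /proc; -16 makes the kernel reluctant to kill the apiserver under memory pressure. A sketch of that probe (assumes a single kube-apiserver process on the node; the log shows minikube running the equivalent shell pipeline over SSH):

-- go sketch --
// A sketch of the oom_adj probe above: find the kube-apiserver PID and read
// /proc/<pid>/oom_adj, expecting -16.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pid, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		panic(err)
	}
	// Assumes pgrep returned exactly one PID.
	data, err := os.ReadFile("/proc/" + strings.TrimSpace(string(pid)) + "/oom_adj")
	if err != nil {
		panic(err)
	}
	fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(data))) // "-16" in the log
}
-- /go sketch --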
	I0412 20:23:07.617401  289404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:08.195772  289404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:08.695638  289404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:09.196205  289404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:12.717153  293188 out.go:203]   - Configuring RBAC rules ...
	I0412 20:23:13.131342  293188 cni.go:93] Creating CNI manager for ""
	I0412 20:23:13.131368  293188 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0412 20:23:09.695425  289404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:10.195930  289404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:10.695954  289404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:11.195633  289404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:11.695826  289404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:12.195852  289404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:12.696130  289404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:13.195253  289404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:13.696165  289404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:14.196144  289404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:13.133726  293188 out.go:176] * Configuring CNI (Container Networking Interface) ...
	I0412 20:23:13.133819  293188 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0412 20:23:13.137703  293188 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.5/kubectl ...
	I0412 20:23:13.137723  293188 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0412 20:23:13.151266  293188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0412 20:23:13.779496  293188 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0412 20:23:13.779592  293188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:13.779602  293188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl label nodes minikube.k8s.io/version=v1.25.2 minikube.k8s.io/commit=dcd548d63d1c0dcbdc0ffc0bd37d4379117c142f minikube.k8s.io/name=embed-certs-20220412200510-42006 minikube.k8s.io/updated_at=2022_04_12T20_23_13_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:13.844319  293188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:13.844349  293188 ops.go:34] apiserver oom_adj: -16
	I0412 20:23:14.416398  293188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:14.915875  293188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:15.416596  293188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:14.695253  289404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:15.195150  289404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:15.695415  289404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:16.195943  289404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:16.695835  289404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:17.196122  289404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:17.695700  289404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:18.195147  289404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:18.695398  289404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:19.195516  289404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:15.916799  293188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:16.416204  293188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:16.916796  293188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:17.416351  293188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:17.916642  293188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:18.416704  293188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:18.916121  293188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:19.415863  293188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:19.915946  293188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:20.416316  293188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:19.695272  289404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:20.195231  289404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:20.695839  289404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:21.196042  289404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:21.695436  289404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:22.195840  289404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:22.265152  289404 kubeadm.go:1020] duration metric: took 14.788147094s to wait for elevateKubeSystemPrivileges.
	I0412 20:23:22.265190  289404 kubeadm.go:393] StartCluster complete in 5m35.954640439s
	I0412 20:23:22.265216  289404 settings.go:142] acquiring lock: {Name:mkaf0259d09993f7f0249c32b54fea561e21f88c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 20:23:22.265344  289404 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0412 20:23:22.266642  289404 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig: {Name:mk47182dadcc139652898f38b199a7292a3b4031 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 20:23:22.781755  289404 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "old-k8s-version-20220412200421-42006" rescaled to 1
	I0412 20:23:22.781838  289404 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0412 20:23:22.784342  289404 out.go:176] * Verifying Kubernetes components...
	I0412 20:23:22.784399  289404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0412 20:23:22.781888  289404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0412 20:23:22.781911  289404 addons.go:415] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I0412 20:23:22.784549  289404 addons.go:65] Setting storage-provisioner=true in profile "old-k8s-version-20220412200421-42006"
	I0412 20:23:22.784574  289404 addons.go:153] Setting addon storage-provisioner=true in "old-k8s-version-20220412200421-42006"
	W0412 20:23:22.784587  289404 addons.go:165] addon storage-provisioner should already be in state true
	I0412 20:23:22.784588  289404 addons.go:65] Setting dashboard=true in profile "old-k8s-version-20220412200421-42006"
	I0412 20:23:22.784607  289404 addons.go:153] Setting addon dashboard=true in "old-k8s-version-20220412200421-42006"
	I0412 20:23:22.782092  289404 config.go:178] Loaded profile config "old-k8s-version-20220412200421-42006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I0412 20:23:22.784626  289404 addons.go:65] Setting metrics-server=true in profile "old-k8s-version-20220412200421-42006"
	I0412 20:23:22.784639  289404 addons.go:153] Setting addon metrics-server=true in "old-k8s-version-20220412200421-42006"
	I0412 20:23:22.784643  289404 host.go:66] Checking if "old-k8s-version-20220412200421-42006" exists ...
	W0412 20:23:22.784654  289404 addons.go:165] addon metrics-server should already be in state true
	I0412 20:23:22.784604  289404 addons.go:65] Setting default-storageclass=true in profile "old-k8s-version-20220412200421-42006"
	I0412 20:23:22.784699  289404 host.go:66] Checking if "old-k8s-version-20220412200421-42006" exists ...
	I0412 20:23:22.784706  289404 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-20220412200421-42006"
	W0412 20:23:22.784622  289404 addons.go:165] addon dashboard should already be in state true
	I0412 20:23:22.784854  289404 host.go:66] Checking if "old-k8s-version-20220412200421-42006" exists ...
	I0412 20:23:22.784998  289404 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220412200421-42006 --format={{.State.Status}}
	I0412 20:23:22.785175  289404 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220412200421-42006 --format={{.State.Status}}
	I0412 20:23:22.785177  289404 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220412200421-42006 --format={{.State.Status}}
	I0412 20:23:22.785289  289404 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220412200421-42006 --format={{.State.Status}}
	I0412 20:23:22.834905  289404 out.go:176]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0412 20:23:22.834967  289404 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0412 20:23:22.834839  289404 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0412 20:23:22.834976  289404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0412 20:23:22.835108  289404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220412200421-42006
	I0412 20:23:22.835109  289404 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0412 20:23:22.835146  289404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0412 20:23:22.835197  289404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220412200421-42006
	I0412 20:23:22.840482  289404 addons.go:153] Setting addon default-storageclass=true in "old-k8s-version-20220412200421-42006"
	W0412 20:23:22.840512  289404 addons.go:165] addon default-storageclass should already be in state true
	I0412 20:23:22.840547  289404 host.go:66] Checking if "old-k8s-version-20220412200421-42006" exists ...
	I0412 20:23:22.841070  289404 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220412200421-42006 --format={{.State.Status}}
	I0412 20:23:22.843111  289404 out.go:176]   - Using image kubernetesui/dashboard:v2.5.1
	I0412 20:23:22.844712  289404 out.go:176]   - Using image k8s.gcr.io/echoserver:1.4
	I0412 20:23:22.844786  289404 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0412 20:23:22.844804  289404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0412 20:23:22.844869  289404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220412200421-42006
	I0412 20:23:22.883155  289404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49427 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/old-k8s-version-20220412200421-42006/id_rsa Username:docker}
	I0412 20:23:22.885010  289404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49427 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/old-k8s-version-20220412200421-42006/id_rsa Username:docker}
	I0412 20:23:22.885724  289404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49427 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/old-k8s-version-20220412200421-42006/id_rsa Username:docker}
	I0412 20:23:22.891532  289404 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0412 20:23:22.891561  289404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0412 20:23:22.891613  289404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220412200421-42006
	I0412 20:23:22.894872  289404 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-20220412200421-42006" to be "Ready" ...
	I0412 20:23:22.894917  289404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.67.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0412 20:23:22.941013  289404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49427 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/old-k8s-version-20220412200421-42006/id_rsa Username:docker}
	I0412 20:23:23.009112  289404 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0412 20:23:23.009152  289404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0412 20:23:23.017044  289404 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0412 20:23:23.017070  289404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0412 20:23:23.087289  289404 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0412 20:23:23.087324  289404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0412 20:23:23.098845  289404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0412 20:23:23.100553  289404 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0412 20:23:23.100586  289404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0412 20:23:23.180997  289404 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0412 20:23:23.181029  289404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0412 20:23:23.199679  289404 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0412 20:23:23.199710  289404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0412 20:23:23.200117  289404 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0412 20:23:23.200143  289404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0412 20:23:23.216261  289404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0412 20:23:23.217306  289404 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0412 20:23:23.217335  289404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0412 20:23:23.293044  289404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0412 20:23:23.296386  289404 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0412 20:23:23.296416  289404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0412 20:23:23.381958  289404 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0412 20:23:23.381988  289404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0412 20:23:23.400957  289404 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0412 20:23:23.400986  289404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0412 20:23:23.404306  289404 start.go:777] {"host.minikube.internal": 192.168.67.1} host record injected into CoreDNS
	I0412 20:23:23.485207  289404 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0412 20:23:23.485240  289404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0412 20:23:23.501224  289404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0412 20:23:24.002810  289404 addons.go:386] Verifying addon metrics-server=true in "old-k8s-version-20220412200421-42006"
	I0412 20:23:20.916222  293188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:21.416859  293188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:21.916573  293188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:22.415915  293188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:22.915956  293188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:23.416356  293188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:23.916733  293188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:24.415894  293188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:24.916772  293188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:25.416205  293188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:25.916674  293188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:26.416183  293188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:26.916867  293188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:26.975833  293188 kubeadm.go:1020] duration metric: took 13.196293095s to wait for elevateKubeSystemPrivileges.
	I0412 20:23:26.975874  293188 kubeadm.go:393] StartCluster complete in 4m44.018219722s
	I0412 20:23:26.975896  293188 settings.go:142] acquiring lock: {Name:mkaf0259d09993f7f0249c32b54fea561e21f88c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 20:23:26.976012  293188 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0412 20:23:26.978211  293188 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig: {Name:mk47182dadcc139652898f38b199a7292a3b4031 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 20:23:27.500701  293188 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "embed-certs-20220412200510-42006" rescaled to 1
	I0412 20:23:27.500763  293188 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0412 20:23:27.503023  293188 out.go:176] * Verifying Kubernetes components...
	I0412 20:23:27.500837  293188 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0412 20:23:27.503093  293188 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0412 20:23:27.500871  293188 addons.go:415] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I0412 20:23:27.503173  293188 addons.go:65] Setting storage-provisioner=true in profile "embed-certs-20220412200510-42006"
	I0412 20:23:27.501024  293188 config.go:178] Loaded profile config "embed-certs-20220412200510-42006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.5
	I0412 20:23:27.503205  293188 addons.go:65] Setting metrics-server=true in profile "embed-certs-20220412200510-42006"
	I0412 20:23:27.503209  293188 addons.go:65] Setting default-storageclass=true in profile "embed-certs-20220412200510-42006"
	I0412 20:23:27.503216  293188 addons.go:153] Setting addon metrics-server=true in "embed-certs-20220412200510-42006"
	I0412 20:23:27.503190  293188 addons.go:65] Setting dashboard=true in profile "embed-certs-20220412200510-42006"
	I0412 20:23:27.503256  293188 addons.go:153] Setting addon dashboard=true in "embed-certs-20220412200510-42006"
	I0412 20:23:27.503196  293188 addons.go:153] Setting addon storage-provisioner=true in "embed-certs-20220412200510-42006"
	W0412 20:23:27.503276  293188 addons.go:165] addon dashboard should already be in state true
	W0412 20:23:27.503282  293188 addons.go:165] addon storage-provisioner should already be in state true
	I0412 20:23:27.503325  293188 host.go:66] Checking if "embed-certs-20220412200510-42006" exists ...
	I0412 20:23:27.503325  293188 host.go:66] Checking if "embed-certs-20220412200510-42006" exists ...
	W0412 20:23:27.503229  293188 addons.go:165] addon metrics-server should already be in state true
	I0412 20:23:27.503228  293188 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-20220412200510-42006"
	I0412 20:23:27.503589  293188 host.go:66] Checking if "embed-certs-20220412200510-42006" exists ...
	I0412 20:23:27.503804  293188 cli_runner.go:164] Run: docker container inspect embed-certs-20220412200510-42006 --format={{.State.Status}}
	I0412 20:23:27.503948  293188 cli_runner.go:164] Run: docker container inspect embed-certs-20220412200510-42006 --format={{.State.Status}}
	I0412 20:23:27.503973  293188 cli_runner.go:164] Run: docker container inspect embed-certs-20220412200510-42006 --format={{.State.Status}}
	I0412 20:23:27.504031  293188 cli_runner.go:164] Run: docker container inspect embed-certs-20220412200510-42006 --format={{.State.Status}}
	I0412 20:23:27.516146  293188 node_ready.go:35] waiting up to 6m0s for node "embed-certs-20220412200510-42006" to be "Ready" ...
	I0412 20:23:27.550686  293188 out.go:176]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0412 20:23:27.550784  293188 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0412 20:23:27.550803  293188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0412 20:23:27.550859  293188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220412200510-42006
	I0412 20:23:27.556204  293188 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0412 20:23:27.556346  293188 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0412 20:23:27.556362  293188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0412 20:23:27.556409  293188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220412200510-42006
	I0412 20:23:27.560689  293188 addons.go:153] Setting addon default-storageclass=true in "embed-certs-20220412200510-42006"
	W0412 20:23:27.560742  293188 addons.go:165] addon default-storageclass should already be in state true
	I0412 20:23:27.560776  293188 host.go:66] Checking if "embed-certs-20220412200510-42006" exists ...
	I0412 20:23:27.561846  293188 cli_runner.go:164] Run: docker container inspect embed-certs-20220412200510-42006 --format={{.State.Status}}
	I0412 20:23:27.563827  293188 out.go:176]   - Using image kubernetesui/dashboard:v2.5.1
	I0412 20:23:27.566302  293188 out.go:176]   - Using image k8s.gcr.io/echoserver:1.4
	I0412 20:23:27.566378  293188 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0412 20:23:27.566390  293188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0412 20:23:27.566448  293188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220412200510-42006
	I0412 20:23:27.595498  293188 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0412 20:23:27.598031  293188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/embed-certs-20220412200510-42006/id_rsa Username:docker}
	I0412 20:23:27.600994  293188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/embed-certs-20220412200510-42006/id_rsa Username:docker}
	I0412 20:23:27.616248  293188 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0412 20:23:27.616282  293188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0412 20:23:27.616343  293188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220412200510-42006
	I0412 20:23:27.627801  293188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/embed-certs-20220412200510-42006/id_rsa Username:docker}
	I0412 20:23:27.656490  293188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/embed-certs-20220412200510-42006/id_rsa Username:docker}
	I0412 20:23:27.738871  293188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0412 20:23:27.787800  293188 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0412 20:23:27.787831  293188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0412 20:23:27.791933  293188 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0412 20:23:27.791958  293188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0412 20:23:27.797765  293188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0412 20:23:27.803394  293188 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0412 20:23:27.803425  293188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0412 20:23:27.808640  293188 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0412 20:23:27.808666  293188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0412 20:23:27.892163  293188 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0412 20:23:27.892195  293188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0412 20:23:27.896562  293188 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0412 20:23:27.896592  293188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0412 20:23:27.901548  293188 start.go:777] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS
	I0412 20:23:27.979768  293188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0412 20:23:27.980178  293188 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0412 20:23:27.980200  293188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0412 20:23:28.001603  293188 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0412 20:23:28.001637  293188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0412 20:23:28.086251  293188 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0412 20:23:28.086331  293188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0412 20:23:28.102562  293188 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0412 20:23:28.102631  293188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0412 20:23:28.179329  293188 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0412 20:23:28.179360  293188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0412 20:23:28.201845  293188 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0412 20:23:28.201898  293188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0412 20:23:28.292511  293188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0412 20:23:28.699642  293188 addons.go:386] Verifying addon metrics-server=true in "embed-certs-20220412200510-42006"
	I0412 20:23:24.323632  289404 out.go:176] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0412 20:23:24.323662  289404 addons.go:417] enableAddons completed in 1.541765473s
	I0412 20:23:24.904515  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:23:26.904888  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:23:28.905738  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:23:29.110155  293188 out.go:176] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0412 20:23:29.110184  293188 addons.go:417] enableAddons completed in 1.609328567s
	I0412 20:23:29.529851  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:23:31.405317  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:23:33.405528  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:23:32.030061  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:23:34.030385  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:23:35.905005  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:23:37.905698  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:23:36.529738  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:23:39.029385  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:23:40.405606  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:23:42.904575  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:23:41.030287  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:23:43.030360  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:23:45.530065  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:23:44.904640  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:23:46.905176  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:23:47.530314  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:23:49.530546  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:23:49.405163  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:23:51.405698  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:23:53.904569  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:23:52.030189  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:23:54.529461  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:23:55.904874  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:23:58.404720  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:23:56.530043  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:23:59.029436  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:24:00.405668  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:24:02.905328  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:24:01.029972  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:24:03.530117  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:24:05.530287  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:24:05.404966  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:24:07.905041  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:24:08.029993  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:24:10.529708  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:24:10.405806  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:24:12.905494  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:24:12.530227  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:24:15.030365  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:24:15.404546  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:24:17.405765  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:24:17.529883  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:24:20.030387  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:24:19.905315  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:24:22.405755  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:24:22.529841  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:24:25.029353  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:24:24.904584  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:24:27.405712  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:24:27.029951  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:24:29.529761  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:24:29.905343  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:24:31.905574  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:24:31.529947  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:24:34.029808  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:24:34.404690  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:24:36.405661  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:24:38.905176  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:24:36.030055  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:24:38.529175  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:24:40.529796  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:24:41.405438  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:24:43.905150  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:24:43.030151  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:24:45.529652  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:24:45.905189  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:24:48.405669  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:24:47.530080  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:24:50.029611  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:24:50.905152  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:24:53.404952  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:24:52.029988  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:24:54.529864  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:24:55.905884  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:24:58.404742  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:24:56.530329  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:24:59.030173  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:25:00.904714  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:25:02.905539  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:25:01.529575  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:25:03.529634  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:25:05.530147  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:25:05.404703  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:25:07.404929  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:25:08.030263  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:25:10.529544  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:25:09.904642  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:25:11.905009  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:25:12.529795  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:25:15.029585  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:25:14.405260  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:25:16.405707  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:25:18.904489  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:25:17.029751  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:25:19.529776  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:25:20.905048  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:25:22.905123  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:25:22.030036  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:25:24.030201  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
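	The interleaved I-lines above are minikube's readiness wait loops: elevateKubeSystemPrivileges polls "kubectl get sa default" until the default service account exists, then node_ready.go re-checks the node object about every 500ms until its Ready condition turns True or the 6m0s budget from start.go runs out. A minimal sketch of the equivalent manual check for one of the two profiles, using the kubeconfig path and node name from the log (the jsonpath expression is illustrative, not minikube's own code):
	
	  # Use this test run's kubeconfig (path as logged by settings.go above)
	  export KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	
	  # Print the Ready condition that node_ready.go keeps polling
	  kubectl get node old-k8s-version-20220412200421-42006 \
	    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}'
	
	Both clusters report "False" for the whole window captured here, which is why the loop keeps printing.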
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	9833ae46466cc       6de166512aa22       30 seconds ago      Exited              kindnet-cni               7                   eac241d106cdd
	e86db06fb9ce1       3c53fa8541f95       12 minutes ago      Running             kube-proxy                0                   484376a2ef747
	51def5f5fb57c       25f8c7f3da61c       12 minutes ago      Running             etcd                      0                   fceaa872be874
	3c8657a1a5932       884d49d6d8c9f       12 minutes ago      Running             kube-scheduler            0                   ac91422e769ae
	1032ec9dc604b       3fc1d62d65872       12 minutes ago      Running             kube-apiserver            0                   c698f24911d58
	71af7fb31571e       b0c9e5e4dbb14       12 minutes ago      Running             kube-controller-manager   0                   32d426a8d8c0a
	
	* 
	* ==> containerd <==
	* -- Logs begin at Tue 2022-04-12 20:12:38 UTC, end at Tue 2022-04-12 20:25:28 UTC. --
	Apr 12 20:17:04 default-k8s-different-port-20220412201228-42006 containerd[471]: time="2022-04-12T20:17:04.123044477Z" level=warning msg="cleaning up after shim disconnected" id=9e5744237dfde180210747e05e22a0b3a09bfe83b09e6e89b16a9b1bb214ee4f namespace=k8s.io
	Apr 12 20:17:04 default-k8s-different-port-20220412201228-42006 containerd[471]: time="2022-04-12T20:17:04.123061425Z" level=info msg="cleaning up dead shim"
	Apr 12 20:17:04 default-k8s-different-port-20220412201228-42006 containerd[471]: time="2022-04-12T20:17:04.134824337Z" level=warning msg="cleanup warnings time=\"2022-04-12T20:17:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2420\n"
	Apr 12 20:17:05 default-k8s-different-port-20220412201228-42006 containerd[471]: time="2022-04-12T20:17:05.135559073Z" level=info msg="RemoveContainer for \"07e5786acde4a835b00d8f15e8dc7966937a257ef07b018158203f654fd2748a\""
	Apr 12 20:17:05 default-k8s-different-port-20220412201228-42006 containerd[471]: time="2022-04-12T20:17:05.141079170Z" level=info msg="RemoveContainer for \"07e5786acde4a835b00d8f15e8dc7966937a257ef07b018158203f654fd2748a\" returns successfully"
	Apr 12 20:19:45 default-k8s-different-port-20220412201228-42006 containerd[471]: time="2022-04-12T20:19:45.682416527Z" level=info msg="CreateContainer within sandbox \"eac241d106cdd1f61526f1545df2f8aed3d703e05effb6e0695e11fe34b449c7\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:6,}"
	Apr 12 20:19:45 default-k8s-different-port-20220412201228-42006 containerd[471]: time="2022-04-12T20:19:45.695536647Z" level=info msg="CreateContainer within sandbox \"eac241d106cdd1f61526f1545df2f8aed3d703e05effb6e0695e11fe34b449c7\" for &ContainerMetadata{Name:kindnet-cni,Attempt:6,} returns container id \"ea18a467fdaf0983e900a92b9825a08b9d95c3efaf2135fb7aedb1eed7c0dcbb\""
	Apr 12 20:19:45 default-k8s-different-port-20220412201228-42006 containerd[471]: time="2022-04-12T20:19:45.696157108Z" level=info msg="StartContainer for \"ea18a467fdaf0983e900a92b9825a08b9d95c3efaf2135fb7aedb1eed7c0dcbb\""
	Apr 12 20:19:45 default-k8s-different-port-20220412201228-42006 containerd[471]: time="2022-04-12T20:19:45.798908010Z" level=info msg="StartContainer for \"ea18a467fdaf0983e900a92b9825a08b9d95c3efaf2135fb7aedb1eed7c0dcbb\" returns successfully"
	Apr 12 20:19:56 default-k8s-different-port-20220412201228-42006 containerd[471]: time="2022-04-12T20:19:56.114093174Z" level=info msg="shim disconnected" id=ea18a467fdaf0983e900a92b9825a08b9d95c3efaf2135fb7aedb1eed7c0dcbb
	Apr 12 20:19:56 default-k8s-different-port-20220412201228-42006 containerd[471]: time="2022-04-12T20:19:56.114150634Z" level=warning msg="cleaning up after shim disconnected" id=ea18a467fdaf0983e900a92b9825a08b9d95c3efaf2135fb7aedb1eed7c0dcbb namespace=k8s.io
	Apr 12 20:19:56 default-k8s-different-port-20220412201228-42006 containerd[471]: time="2022-04-12T20:19:56.114159870Z" level=info msg="cleaning up dead shim"
	Apr 12 20:19:56 default-k8s-different-port-20220412201228-42006 containerd[471]: time="2022-04-12T20:19:56.125285686Z" level=warning msg="cleanup warnings time=\"2022-04-12T20:19:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2758\n"
	Apr 12 20:19:56 default-k8s-different-port-20220412201228-42006 containerd[471]: time="2022-04-12T20:19:56.437004065Z" level=info msg="RemoveContainer for \"9e5744237dfde180210747e05e22a0b3a09bfe83b09e6e89b16a9b1bb214ee4f\""
	Apr 12 20:19:56 default-k8s-different-port-20220412201228-42006 containerd[471]: time="2022-04-12T20:19:56.441491725Z" level=info msg="RemoveContainer for \"9e5744237dfde180210747e05e22a0b3a09bfe83b09e6e89b16a9b1bb214ee4f\" returns successfully"
	Apr 12 20:24:58 default-k8s-different-port-20220412201228-42006 containerd[471]: time="2022-04-12T20:24:58.682603030Z" level=info msg="CreateContainer within sandbox \"eac241d106cdd1f61526f1545df2f8aed3d703e05effb6e0695e11fe34b449c7\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:7,}"
	Apr 12 20:24:58 default-k8s-different-port-20220412201228-42006 containerd[471]: time="2022-04-12T20:24:58.696164115Z" level=info msg="CreateContainer within sandbox \"eac241d106cdd1f61526f1545df2f8aed3d703e05effb6e0695e11fe34b449c7\" for &ContainerMetadata{Name:kindnet-cni,Attempt:7,} returns container id \"9833ae46466ccbb9055512e120cc659bee6ff8ac05bf843caeb9217333fd6b63\""
	Apr 12 20:24:58 default-k8s-different-port-20220412201228-42006 containerd[471]: time="2022-04-12T20:24:58.696772681Z" level=info msg="StartContainer for \"9833ae46466ccbb9055512e120cc659bee6ff8ac05bf843caeb9217333fd6b63\""
	Apr 12 20:24:58 default-k8s-different-port-20220412201228-42006 containerd[471]: time="2022-04-12T20:24:58.885508636Z" level=info msg="StartContainer for \"9833ae46466ccbb9055512e120cc659bee6ff8ac05bf843caeb9217333fd6b63\" returns successfully"
	Apr 12 20:25:09 default-k8s-different-port-20220412201228-42006 containerd[471]: time="2022-04-12T20:25:09.124126461Z" level=info msg="shim disconnected" id=9833ae46466ccbb9055512e120cc659bee6ff8ac05bf843caeb9217333fd6b63
	Apr 12 20:25:09 default-k8s-different-port-20220412201228-42006 containerd[471]: time="2022-04-12T20:25:09.124200062Z" level=warning msg="cleaning up after shim disconnected" id=9833ae46466ccbb9055512e120cc659bee6ff8ac05bf843caeb9217333fd6b63 namespace=k8s.io
	Apr 12 20:25:09 default-k8s-different-port-20220412201228-42006 containerd[471]: time="2022-04-12T20:25:09.124208947Z" level=info msg="cleaning up dead shim"
	Apr 12 20:25:09 default-k8s-different-port-20220412201228-42006 containerd[471]: time="2022-04-12T20:25:09.134960427Z" level=warning msg="cleanup warnings time=\"2022-04-12T20:25:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2863\n"
	Apr 12 20:25:09 default-k8s-different-port-20220412201228-42006 containerd[471]: time="2022-04-12T20:25:09.961243494Z" level=info msg="RemoveContainer for \"ea18a467fdaf0983e900a92b9825a08b9d95c3efaf2135fb7aedb1eed7c0dcbb\""
	Apr 12 20:25:09 default-k8s-different-port-20220412201228-42006 containerd[471]: time="2022-04-12T20:25:09.966172554Z" level=info msg="RemoveContainer for \"ea18a467fdaf0983e900a92b9825a08b9d95c3efaf2135fb7aedb1eed7c0dcbb\" returns successfully"
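	The containerd log above captures the kindnet-cni crash loop behind this failure: attempt 6 starts at 20:19:45 and its shim disconnects eleven seconds later, and attempt 7 repeats the pattern at 20:24:58, matching the Exited kindnet-cni row in the container status table. A hedged sketch for pulling the actual crash output from inside the node (the docker exec target is the node container named in the log; crictl is assumed available in the kicbase image and to accept the short container ID from the table):
	
	  # Open a shell inside the minikube node container
	  docker exec -it default-k8s-different-port-20220412201228-42006 bash
	
	  # Inside the node: list every kindnet-cni attempt, then read the latest one's logs
	  crictl ps -a --name kindnet-cni
	  crictl logs 9833ae46466cc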
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-different-port-20220412201228-42006
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-different-port-20220412201228-42006
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=dcd548d63d1c0dcbdc0ffc0bd37d4379117c142f
	                    minikube.k8s.io/name=default-k8s-different-port-20220412201228-42006
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_04_12T20_13_10_0700
	                    minikube.k8s.io/version=v1.25.2
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 12 Apr 2022 20:13:06 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-different-port-20220412201228-42006
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 12 Apr 2022 20:25:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 12 Apr 2022 20:23:26 +0000   Tue, 12 Apr 2022 20:13:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 12 Apr 2022 20:23:26 +0000   Tue, 12 Apr 2022 20:13:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 12 Apr 2022 20:23:26 +0000   Tue, 12 Apr 2022 20:13:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Tue, 12 Apr 2022 20:23:26 +0000   Tue, 12 Apr 2022 20:13:03 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    default-k8s-different-port-20220412201228-42006
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873828Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873828Ki
	  pods:               110
	System Info:
	  Machine ID:                 140a143b31184b58be947b52a01fff83
	  System UUID:                ef825856-4086-4c06-9629-95bede787d92
	  Boot ID:                    16b2caa1-c1b9-4ccc-85b8-d4dc3f51a5e1
	  Kernel Version:             5.13.0-1023-gcp
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.5.10
	  Kubelet Version:            v1.23.5
	  Kube-Proxy Version:         v1.23.5
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                                       ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-default-k8s-different-port-20220412201228-42006                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kindnet-852v4                                                              100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-default-k8s-different-port-20220412201228-42006             250m (3%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-default-k8s-different-port-20220412201228-42006    200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-nfsgp                                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-default-k8s-different-port-20220412201228-42006             100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  Starting                 12m                kube-proxy  
	  Normal  NodeHasSufficientMemory  12m (x5 over 12m)  kubelet     Node default-k8s-different-port-20220412201228-42006 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x5 over 12m)  kubelet     Node default-k8s-different-port-20220412201228-42006 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x4 over 12m)  kubelet     Node default-k8s-different-port-20220412201228-42006 status is now: NodeHasSufficientPID
	  Normal  Starting                 12m                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  12m                kubelet     Node default-k8s-different-port-20220412201228-42006 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m                kubelet     Node default-k8s-different-port-20220412201228-42006 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m                kubelet     Node default-k8s-different-port-20220412201228-42006 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet     Updated Node Allocatable limit across pods
	
	* 
	* ==> dmesg <==
	* [  +0.000009] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +0.125166] IPv4: martian source 10.244.0.7 from 10.244.0.7, on dev vethe3e22a2f
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff d2 83 e6 b4 2e c9 08 06
	[  +0.519855] IPv4: martian source 10.244.0.8 from 10.244.0.8, on dev vethde433a44
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fe f7 53 8a eb 26 08 06
	[  +0.208112] IPv4: martian source 10.244.0.9 from 10.244.0.9, on dev veth05fda112
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 06 c9 f0 64 c1 d9 08 06
	[Apr12 20:12] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +1.026706] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +1.023926] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +2.947865] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +1.023840] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +1.019933] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +2.959880] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +1.007861] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +1.023916] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	
	* 
	* ==> etcd [51def5f5fb57c8ab61a9c585b1fe038e725e93a3a81684c7e48cceffbcd0e646] <==
	* {"level":"info","ts":"2022-04-12T20:13:03.196Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-04-12T20:13:03.196Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-04-12T20:13:03.196Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-04-12T20:13:03.196Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-04-12T20:13:03.196Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-04-12T20:13:04.085Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2022-04-12T20:13:04.085Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2022-04-12T20:13:04.085Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2022-04-12T20:13:04.085Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2022-04-12T20:13:04.085Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-04-12T20:13:04.085Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2022-04-12T20:13:04.085Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-04-12T20:13:04.085Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:default-k8s-different-port-20220412201228-42006 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2022-04-12T20:13:04.085Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-04-12T20:13:04.085Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-04-12T20:13:04.085Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-04-12T20:13:04.085Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-04-12T20:13:04.085Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-04-12T20:13:04.086Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2022-04-12T20:13:04.086Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-04-12T20:13:04.086Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-04-12T20:13:04.087Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-04-12T20:13:04.087Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2022-04-12T20:23:04.101Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":585}
	{"level":"info","ts":"2022-04-12T20:23:04.102Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":585,"took":"660.807µs"}
	
	* 
	* ==> kernel <==
	*  20:25:28 up  3:08,  0 users,  load average: 0.48, 0.85, 1.17
	Linux default-k8s-different-port-20220412201228-42006 5.13.0-1023-gcp #28~20.04.1-Ubuntu SMP Wed Mar 30 03:51:07 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [1032ec9dc604b2d805be253a0f7df89424fc5ef71ef86566ee57cd79cf66939c] <==
	* I0412 20:13:06.378891       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0412 20:13:06.380058       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
	I0412 20:13:06.380365       1 shared_informer.go:247] Caches are synced for node_authorizer 
	I0412 20:13:06.380526       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0412 20:13:06.380660       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I0412 20:13:06.389081       1 controller.go:611] quota admission added evaluator for: namespaces
	I0412 20:13:07.223083       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0412 20:13:07.223129       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0412 20:13:07.227737       1 storage_scheduling.go:93] created PriorityClass system-node-critical with value 2000001000
	I0412 20:13:07.231059       1 storage_scheduling.go:93] created PriorityClass system-cluster-critical with value 2000000000
	I0412 20:13:07.231090       1 storage_scheduling.go:109] all system priority classes are created successfully or already exist.
	I0412 20:13:07.640851       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0412 20:13:07.682744       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0412 20:13:07.805024       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0412 20:13:07.813172       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0412 20:13:07.814261       1 controller.go:611] quota admission added evaluator for: endpoints
	I0412 20:13:07.818683       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0412 20:13:08.360879       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0412 20:13:09.411225       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0412 20:13:09.419785       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0412 20:13:09.431828       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0412 20:13:14.599758       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0412 20:13:21.818265       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0412 20:13:21.968747       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0412 20:13:22.481492       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	* 
	* ==> kube-controller-manager [71af7fb31571e3cef12dcdba3ab49897e95bdbe6c1d9d6d5bbb1c36c97242cda] <==
	* I0412 20:13:21.215625       1 shared_informer.go:247] Caches are synced for taint 
	I0412 20:13:21.215676       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0412 20:13:21.215694       1 node_lifecycle_controller.go:1397] Initializing eviction metric for zone: 
	W0412 20:13:21.215761       1 node_lifecycle_controller.go:1012] Missing timestamp for Node default-k8s-different-port-20220412201228-42006. Assuming now as a timestamp.
	I0412 20:13:21.215805       1 node_lifecycle_controller.go:1163] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I0412 20:13:21.215859       1 event.go:294] "Event occurred" object="default-k8s-different-port-20220412201228-42006" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node default-k8s-different-port-20220412201228-42006 event: Registered Node default-k8s-different-port-20220412201228-42006 in Controller"
	I0412 20:13:21.229490       1 shared_informer.go:247] Caches are synced for deployment 
	I0412 20:13:21.315704       1 shared_informer.go:247] Caches are synced for ReplicationController 
	I0412 20:13:21.360412       1 shared_informer.go:247] Caches are synced for disruption 
	I0412 20:13:21.360445       1 disruption.go:371] Sending events to api server.
	I0412 20:13:21.368497       1 shared_informer.go:247] Caches are synced for HPA 
	I0412 20:13:21.385835       1 shared_informer.go:247] Caches are synced for resource quota 
	I0412 20:13:21.400192       1 shared_informer.go:247] Caches are synced for endpoint 
	I0412 20:13:21.411344       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
	I0412 20:13:21.424347       1 shared_informer.go:247] Caches are synced for resource quota 
	I0412 20:13:21.821606       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0412 20:13:21.821636       1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0412 20:13:21.825308       1 event.go:294] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-852v4"
	I0412 20:13:21.825372       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-nfsgp"
	I0412 20:13:21.839671       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0412 20:13:21.971282       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-64897985d to 2"
	I0412 20:13:22.044641       1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-64897985d to 1"
	I0412 20:13:22.121317       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-rmqrj"
	I0412 20:13:22.126350       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-c2gzm"
	I0412 20:13:22.145463       1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-64897985d-rmqrj"
	
	* 
	* ==> kube-proxy [e86db06fb9ce1685b312bc36622f28895b85dab6e39ee399901dce4efc6da848] <==
	* I0412 20:13:22.455007       1 node.go:163] Successfully retrieved node IP: 192.168.49.2
	I0412 20:13:22.455073       1 server_others.go:138] "Detected node IP" address="192.168.49.2"
	I0412 20:13:22.455117       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0412 20:13:22.478285       1 server_others.go:206] "Using iptables Proxier"
	I0412 20:13:22.478320       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0412 20:13:22.478326       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0412 20:13:22.478350       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0412 20:13:22.478788       1 server.go:656] "Version info" version="v1.23.5"
	I0412 20:13:22.479353       1 config.go:317] "Starting service config controller"
	I0412 20:13:22.479385       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0412 20:13:22.479423       1 config.go:226] "Starting endpoint slice config controller"
	I0412 20:13:22.479433       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0412 20:13:22.579611       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0412 20:13:22.579633       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [3c8657a1a5932876c532e5632e32b1b7bd034c015a4b5519a1ff53cf749d1ffd] <==
	* W0412 20:13:06.388989       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0412 20:13:06.389007       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0412 20:13:06.389014       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0412 20:13:06.389018       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0412 20:13:06.389730       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0412 20:13:06.389771       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0412 20:13:07.206657       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0412 20:13:07.206707       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0412 20:13:07.265873       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0412 20:13:07.265925       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0412 20:13:07.296201       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0412 20:13:07.296245       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0412 20:13:07.302602       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0412 20:13:07.302649       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0412 20:13:07.338917       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0412 20:13:07.338952       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0412 20:13:07.341982       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0412 20:13:07.342023       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0412 20:13:07.427305       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0412 20:13:07.427338       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0412 20:13:07.446555       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0412 20:13:07.446595       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0412 20:13:07.468839       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0412 20:13:07.468878       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0412 20:13:07.903442       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2022-04-12 20:12:38 UTC, end at Tue 2022-04-12 20:25:29 UTC. --
	Apr 12 20:24:24 default-k8s-different-port-20220412201228-42006 kubelet[1290]: E0412 20:24:24.952570    1290 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:24:25 default-k8s-different-port-20220412201228-42006 kubelet[1290]: I0412 20:24:25.680490    1290 scope.go:110] "RemoveContainer" containerID="ea18a467fdaf0983e900a92b9825a08b9d95c3efaf2135fb7aedb1eed7c0dcbb"
	Apr 12 20:24:25 default-k8s-different-port-20220412201228-42006 kubelet[1290]: E0412 20:24:25.680818    1290 pod_workers.go:949] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kindnet-cni pod=kindnet-852v4_kube-system(d4596d79-4aba-4c96-9fd5-c2c2b2010810)\"" pod="kube-system/kindnet-852v4" podUID=d4596d79-4aba-4c96-9fd5-c2c2b2010810
	Apr 12 20:24:29 default-k8s-different-port-20220412201228-42006 kubelet[1290]: E0412 20:24:29.953457    1290 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:24:34 default-k8s-different-port-20220412201228-42006 kubelet[1290]: E0412 20:24:34.955121    1290 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:24:36 default-k8s-different-port-20220412201228-42006 kubelet[1290]: I0412 20:24:36.680374    1290 scope.go:110] "RemoveContainer" containerID="ea18a467fdaf0983e900a92b9825a08b9d95c3efaf2135fb7aedb1eed7c0dcbb"
	Apr 12 20:24:36 default-k8s-different-port-20220412201228-42006 kubelet[1290]: E0412 20:24:36.680705    1290 pod_workers.go:949] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kindnet-cni pod=kindnet-852v4_kube-system(d4596d79-4aba-4c96-9fd5-c2c2b2010810)\"" pod="kube-system/kindnet-852v4" podUID=d4596d79-4aba-4c96-9fd5-c2c2b2010810
	Apr 12 20:24:39 default-k8s-different-port-20220412201228-42006 kubelet[1290]: E0412 20:24:39.956431    1290 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:24:44 default-k8s-different-port-20220412201228-42006 kubelet[1290]: E0412 20:24:44.957716    1290 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:24:47 default-k8s-different-port-20220412201228-42006 kubelet[1290]: I0412 20:24:47.679522    1290 scope.go:110] "RemoveContainer" containerID="ea18a467fdaf0983e900a92b9825a08b9d95c3efaf2135fb7aedb1eed7c0dcbb"
	Apr 12 20:24:47 default-k8s-different-port-20220412201228-42006 kubelet[1290]: E0412 20:24:47.679812    1290 pod_workers.go:949] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kindnet-cni pod=kindnet-852v4_kube-system(d4596d79-4aba-4c96-9fd5-c2c2b2010810)\"" pod="kube-system/kindnet-852v4" podUID=d4596d79-4aba-4c96-9fd5-c2c2b2010810
	Apr 12 20:24:49 default-k8s-different-port-20220412201228-42006 kubelet[1290]: E0412 20:24:49.959315    1290 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:24:54 default-k8s-different-port-20220412201228-42006 kubelet[1290]: E0412 20:24:54.960793    1290 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:24:58 default-k8s-different-port-20220412201228-42006 kubelet[1290]: I0412 20:24:58.680293    1290 scope.go:110] "RemoveContainer" containerID="ea18a467fdaf0983e900a92b9825a08b9d95c3efaf2135fb7aedb1eed7c0dcbb"
	Apr 12 20:24:59 default-k8s-different-port-20220412201228-42006 kubelet[1290]: E0412 20:24:59.962391    1290 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:25:04 default-k8s-different-port-20220412201228-42006 kubelet[1290]: E0412 20:25:04.963546    1290 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:25:09 default-k8s-different-port-20220412201228-42006 kubelet[1290]: I0412 20:25:09.960222    1290 scope.go:110] "RemoveContainer" containerID="ea18a467fdaf0983e900a92b9825a08b9d95c3efaf2135fb7aedb1eed7c0dcbb"
	Apr 12 20:25:09 default-k8s-different-port-20220412201228-42006 kubelet[1290]: I0412 20:25:09.960600    1290 scope.go:110] "RemoveContainer" containerID="9833ae46466ccbb9055512e120cc659bee6ff8ac05bf843caeb9217333fd6b63"
	Apr 12 20:25:09 default-k8s-different-port-20220412201228-42006 kubelet[1290]: E0412 20:25:09.960988    1290 pod_workers.go:949] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kindnet-cni pod=kindnet-852v4_kube-system(d4596d79-4aba-4c96-9fd5-c2c2b2010810)\"" pod="kube-system/kindnet-852v4" podUID=d4596d79-4aba-4c96-9fd5-c2c2b2010810
	Apr 12 20:25:09 default-k8s-different-port-20220412201228-42006 kubelet[1290]: E0412 20:25:09.964994    1290 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:25:14 default-k8s-different-port-20220412201228-42006 kubelet[1290]: E0412 20:25:14.966231    1290 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:25:19 default-k8s-different-port-20220412201228-42006 kubelet[1290]: E0412 20:25:19.967531    1290 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:25:22 default-k8s-different-port-20220412201228-42006 kubelet[1290]: I0412 20:25:22.679628    1290 scope.go:110] "RemoveContainer" containerID="9833ae46466ccbb9055512e120cc659bee6ff8ac05bf843caeb9217333fd6b63"
	Apr 12 20:25:22 default-k8s-different-port-20220412201228-42006 kubelet[1290]: E0412 20:25:22.680062    1290 pod_workers.go:949] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kindnet-cni pod=kindnet-852v4_kube-system(d4596d79-4aba-4c96-9fd5-c2c2b2010810)\"" pod="kube-system/kindnet-852v4" podUID=d4596d79-4aba-4c96-9fd5-c2c2b2010810
	Apr 12 20:25:24 default-k8s-different-port-20220412201228-42006 kubelet[1290]: E0412 20:25:24.968592    1290 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	

                                                
                                                
-- /stdout --
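Taken together, the logs above show a single failure chain: the kindnet-cni container is in CrashLoopBackOff (back-off 5m0s), so the CNI plugin never initializes, the kubelet keeps reporting NetworkReady=false, the node stays NotReady, and DeployApp eventually times out. A minimal way to inspect the crash-looping pod directly, using the pod and container names taken from the log (illustrative follow-up commands, not part of the recorded test run; --previous assumes the container has restarted at least once):

	kubectl --context default-k8s-different-port-20220412201228-42006 -n kube-system describe pod kindnet-852v4
	kubectl --context default-k8s-different-port-20220412201228-42006 -n kube-system logs kindnet-852v4 -c kindnet-cni --previous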
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220412201228-42006 -n default-k8s-different-port-20220412201228-42006
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-different-port-20220412201228-42006 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: busybox coredns-64897985d-c2gzm storage-provisioner
helpers_test.go:272: ======> post-mortem[TestStartStop/group/default-k8s-different-port/serial/DeployApp]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context default-k8s-different-port-20220412201228-42006 describe pod busybox coredns-64897985d-c2gzm storage-provisioner
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20220412201228-42006 describe pod busybox coredns-64897985d-c2gzm storage-provisioner: exit status 1 (62.386192ms)

                                                
                                                
-- stdout --
	Name:         busybox
	Namespace:    default
	Priority:     0
	Node:         <none>
	Labels:       integration-test=busybox
	Annotations:  <none>
	Status:       Pending
	IP:           
	IPs:          <none>
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bcrdt (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-bcrdt:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                 From               Message
	  ----     ------            ----                ----               -------
	  Warning  FailedScheduling  49s (x8 over 8m4s)  default-scheduler  0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.

                                                
                                                
-- /stdout --
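The busybox pod is Pending for the same underlying reason: the node carries the node.kubernetes.io/not-ready:NoSchedule taint recorded in the node description above, and the pod's default tolerations (listed in the describe output) only cover the NoExecute variants of not-ready and unreachable, so the scheduler reports 0/1 nodes available. The taint can be read back directly (an illustrative check, not part of the test run):

	kubectl --context default-k8s-different-port-20220412201228-42006 get node default-k8s-different-port-20220412201228-42006 -o jsonpath='{.spec.taints}'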
** stderr ** 
	Error from server (NotFound): pods "coredns-64897985d-c2gzm" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
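The two NotFound errors are most plausibly a race: the pod listing and the describe are separate kubectl invocations, so pods that were terminating when the list ran (the controller-manager log above shows the coredns-64897985d ReplicaSet being scaled down) can be gone by the time describe executes. Re-running the same listing would show the then-current state (illustrative):

	kubectl --context default-k8s-different-port-20220412201228-42006 get po -A --field-selector=status.phase!=Running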
helpers_test.go:277: kubectl --context default-k8s-different-port-20220412201228-42006 describe pod busybox coredns-64897985d-c2gzm storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/DeployApp (484.82s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (596.07s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:240: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-20220412200421-42006 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0
E0412 20:17:29.657357   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220412200453-42006/client.crt: no such file or directory
E0412 20:17:45.720238   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/addons-20220412192056-42006/client.crt: no such file or directory
E0412 20:17:58.178203   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/bridge-20220412195202-42006/client.crt: no such file or directory
E0412 20:18:02.669200   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/addons-20220412192056-42006/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-20220412200421-42006 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0: exit status 80 (9m53.781223501s)

                                                
                                                
-- stdout --
	* [old-k8s-version-20220412200421-42006] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=13812
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Kubernetes 1.23.5 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.23.5
	* Using the docker driver based on existing profile
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	* Starting control plane node old-k8s-version-20220412200421-42006 in cluster old-k8s-version-20220412200421-42006
	* Pulling base image ...
	* Restarting existing docker container for "old-k8s-version-20220412200421-42006" ...
	* Preparing Kubernetes v1.16.0 on containerd 1.5.10 ...
	  - kubelet.cni-conf-dir=/etc/cni/net.mk
	* Configuring CNI (Container Networking Interface) ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image kubernetesui/dashboard:v2.5.1
	  - Using image k8s.gcr.io/echoserver:1.4
	* Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0412 20:17:29.197380  289404 out.go:297] Setting OutFile to fd 1 ...
	I0412 20:17:29.197556  289404 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0412 20:17:29.197567  289404 out.go:310] Setting ErrFile to fd 2...
	I0412 20:17:29.197574  289404 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0412 20:17:29.197697  289404 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/bin
	I0412 20:17:29.198001  289404 out.go:304] Setting JSON to false
	I0412 20:17:29.199693  289404 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":10802,"bootTime":1649783847,"procs":690,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1023-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0412 20:17:29.199774  289404 start.go:125] virtualization: kvm guest
	I0412 20:17:29.202751  289404 out.go:176] * [old-k8s-version-20220412200421-42006] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	I0412 20:17:29.204680  289404 out.go:176]   - MINIKUBE_LOCATION=13812
	I0412 20:17:29.202936  289404 notify.go:193] Checking for updates...
	I0412 20:17:29.206545  289404 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0412 20:17:29.208334  289404 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0412 20:17:29.210033  289404 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube
	I0412 20:17:29.211681  289404 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0412 20:17:29.212186  289404 config.go:178] Loaded profile config "old-k8s-version-20220412200421-42006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I0412 20:17:29.214567  289404 out.go:176] * Kubernetes 1.23.5 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.23.5
	I0412 20:17:29.214664  289404 driver.go:346] Setting default libvirt URI to qemu:///system
	I0412 20:17:29.257552  289404 docker.go:137] docker version: linux-20.10.14
	I0412 20:17:29.257664  289404 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0412 20:17:29.358882  289404 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:75 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:true NGoroutines:44 SystemTime:2022-04-12 20:17:29.289676597 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1023-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662799872 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0412 20:17:29.359016  289404 docker.go:254] overlay module found
	I0412 20:17:29.361664  289404 out.go:176] * Using the docker driver based on existing profile
	I0412 20:17:29.361689  289404 start.go:284] selected driver: docker
	I0412 20:17:29.361695  289404 start.go:801] validating driver "docker" against &{Name:old-k8s-version-20220412200421-42006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220412200421-42006 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0412 20:17:29.361823  289404 start.go:812] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	W0412 20:17:29.361867  289404 oci.go:120] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0412 20:17:29.361884  289404 out.go:241] ! Your cgroup does not allow setting memory.
	! Your cgroup does not allow setting memory.
	I0412 20:17:29.363683  289404 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0412 20:17:29.364314  289404 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0412 20:17:29.462530  289404 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:75 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:true NGoroutines:44 SystemTime:2022-04-12 20:17:29.395046244 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1023-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662799872 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	W0412 20:17:29.462681  289404 oci.go:120] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0412 20:17:29.462711  289404 out.go:241] ! Your cgroup does not allow setting memory.
	! Your cgroup does not allow setting memory.
	I0412 20:17:29.464919  289404 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0412 20:17:29.465031  289404 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0412 20:17:29.465059  289404 cni.go:93] Creating CNI manager for ""
	I0412 20:17:29.465068  289404 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0412 20:17:29.465090  289404 start_flags.go:306] config:
	{Name:old-k8s-version-20220412200421-42006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220412200421-42006 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0412 20:17:29.467276  289404 out.go:176] * Starting control plane node old-k8s-version-20220412200421-42006 in cluster old-k8s-version-20220412200421-42006
	I0412 20:17:29.467306  289404 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0412 20:17:29.468855  289404 out.go:176] * Pulling base image ...
	I0412 20:17:29.468883  289404 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0412 20:17:29.468914  289404 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon
	I0412 20:17:29.468919  289404 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4
	I0412 20:17:29.469037  289404 cache.go:57] Caching tarball of preloaded images
	I0412 20:17:29.469329  289404 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0412 20:17:29.469377  289404 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on containerd
	I0412 20:17:29.469540  289404 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220412200421-42006/config.json ...
	I0412 20:17:29.515418  289404 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon, skipping pull
	I0412 20:17:29.515453  289404 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 exists in daemon, skipping load
	I0412 20:17:29.515475  289404 cache.go:206] Successfully downloaded all kic artifacts
	I0412 20:17:29.515513  289404 start.go:352] acquiring machines lock for old-k8s-version-20220412200421-42006: {Name:mk51335e8aecb7357290fc27d80d48b525f2bff6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0412 20:17:29.515623  289404 start.go:356] acquired machines lock for "old-k8s-version-20220412200421-42006" in 87.128µs
	I0412 20:17:29.515653  289404 start.go:94] Skipping create...Using existing machine configuration
	I0412 20:17:29.515665  289404 fix.go:55] fixHost starting: 
	I0412 20:17:29.515986  289404 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220412200421-42006 --format={{.State.Status}}
	I0412 20:17:29.551090  289404 fix.go:103] recreateIfNeeded on old-k8s-version-20220412200421-42006: state=Stopped err=<nil>
	W0412 20:17:29.551126  289404 fix.go:129] unexpected machine state, will restart: <nil>
	I0412 20:17:29.554026  289404 out.go:176] * Restarting existing docker container for "old-k8s-version-20220412200421-42006" ...
	I0412 20:17:29.554110  289404 cli_runner.go:164] Run: docker start old-k8s-version-20220412200421-42006
	I0412 20:17:29.948290  289404 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220412200421-42006 --format={{.State.Status}}
	I0412 20:17:29.983637  289404 kic.go:416] container "old-k8s-version-20220412200421-42006" state is running.
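	[note] The restart path above is just a "docker start" gated on the container's reported state; a minimal standalone sketch of the same probe (container name taken from this run):
	    # Start the profile container only if it is not already running.
	    NAME=old-k8s-version-20220412200421-42006
	    STATE=$(docker container inspect "$NAME" --format '{{.State.Status}}')
	    [ "$STATE" = "running" ] || docker start "$NAME"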
	I0412 20:17:29.984024  289404 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220412200421-42006
	I0412 20:17:30.018880  289404 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220412200421-42006/config.json ...
	I0412 20:17:30.019121  289404 machine.go:88] provisioning docker machine ...
	I0412 20:17:30.019150  289404 ubuntu.go:169] provisioning hostname "old-k8s-version-20220412200421-42006"
	I0412 20:17:30.019209  289404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220412200421-42006
	I0412 20:17:30.056483  289404 main.go:134] libmachine: Using SSH client type: native
	I0412 20:17:30.056726  289404 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7d71c0] 0x7da220 <nil>  [] 0s} 127.0.0.1 49427 <nil> <nil>}
	I0412 20:17:30.056753  289404 main.go:134] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-20220412200421-42006 && echo "old-k8s-version-20220412200421-42006" | sudo tee /etc/hostname
	I0412 20:17:30.057485  289404 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50282->127.0.0.1:49427: read: connection reset by peer
	I0412 20:17:33.190100  289404 main.go:134] libmachine: SSH cmd err, output: <nil>: old-k8s-version-20220412200421-42006
	
	I0412 20:17:33.190188  289404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220412200421-42006
	I0412 20:17:33.225478  289404 main.go:134] libmachine: Using SSH client type: native
	I0412 20:17:33.225643  289404 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7d71c0] 0x7da220 <nil>  [] 0s} 127.0.0.1 49427 <nil> <nil>}
	I0412 20:17:33.225665  289404 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-20220412200421-42006' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-20220412200421-42006/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-20220412200421-42006' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0412 20:17:33.344395  289404 main.go:134] libmachine: SSH cmd err, output: <nil>: 
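	[note] The /etc/hosts edit shipped over SSH above is idempotent by construction; the same logic as a standalone sketch (hostname taken from this run, GNU grep/sed assumed):
	    HOST=old-k8s-version-20220412200421-42006
	    if ! grep -q "\s${HOST}$" /etc/hosts; then                           # already mapped?
	      if grep -q '^127.0.1.1\s' /etc/hosts; then
	        sudo sed -i "s/^127.0.1.1\s.*/127.0.1.1 ${HOST}/" /etc/hosts     # rewrite existing entry
	      else
	        echo "127.0.1.1 ${HOST}" | sudo tee -a /etc/hosts                # append new entry
	      fi
	    fi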
	I0412 20:17:33.344433  289404 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube}
	I0412 20:17:33.344501  289404 ubuntu.go:177] setting up certificates
	I0412 20:17:33.344513  289404 provision.go:83] configureAuth start
	I0412 20:17:33.344580  289404 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220412200421-42006
	I0412 20:17:33.379393  289404 provision.go:138] copyHostCerts
	I0412 20:17:33.379467  289404 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem, removing ...
	I0412 20:17:33.379479  289404 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem
	I0412 20:17:33.379543  289404 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem (1082 bytes)
	I0412 20:17:33.379687  289404 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem, removing ...
	I0412 20:17:33.379705  289404 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem
	I0412 20:17:33.379735  289404 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem (1123 bytes)
	I0412 20:17:33.379802  289404 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem, removing ...
	I0412 20:17:33.379810  289404 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem
	I0412 20:17:33.379832  289404 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem (1675 bytes)
	I0412 20:17:33.379899  289404 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-20220412200421-42006 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-20220412200421-42006]
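	[note] The SAN list requested above can be checked against the emitted server.pem with stock openssl; a sketch using this run's machine path:
	    # Print the Subject Alternative Names baked into the generated server certificate.
	    openssl x509 -noout -text \
	      -in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem \
	      | grep -A1 'Subject Alternative Name'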
	I0412 20:17:33.613592  289404 provision.go:172] copyRemoteCerts
	I0412 20:17:33.613653  289404 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0412 20:17:33.613694  289404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220412200421-42006
	I0412 20:17:33.650564  289404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49427 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/old-k8s-version-20220412200421-42006/id_rsa Username:docker}
	I0412 20:17:33.739873  289404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0412 20:17:33.758647  289404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem --> /etc/docker/server.pem (1277 bytes)
	I0412 20:17:33.776884  289404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0412 20:17:33.794757  289404 provision.go:86] duration metric: configureAuth took 450.228367ms
	I0412 20:17:33.794785  289404 ubuntu.go:193] setting minikube options for container-runtime
	I0412 20:17:33.794975  289404 config.go:178] Loaded profile config "old-k8s-version-20220412200421-42006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I0412 20:17:33.794989  289404 machine.go:91] provisioned docker machine in 3.775852896s
	I0412 20:17:33.794997  289404 start.go:306] post-start starting for "old-k8s-version-20220412200421-42006" (driver="docker")
	I0412 20:17:33.795005  289404 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0412 20:17:33.795058  289404 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0412 20:17:33.795106  289404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220412200421-42006
	I0412 20:17:33.828573  289404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49427 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/old-k8s-version-20220412200421-42006/id_rsa Username:docker}
	I0412 20:17:33.915698  289404 ssh_runner.go:195] Run: cat /etc/os-release
	I0412 20:17:33.918851  289404 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0412 20:17:33.918873  289404 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0412 20:17:33.918893  289404 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0412 20:17:33.918900  289404 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0412 20:17:33.918911  289404 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/addons for local assets ...
	I0412 20:17:33.918969  289404 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files for local assets ...
	I0412 20:17:33.919030  289404 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/420062.pem -> 420062.pem in /etc/ssl/certs
	I0412 20:17:33.919114  289404 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0412 20:17:33.926132  289404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/420062.pem --> /etc/ssl/certs/420062.pem (1708 bytes)
	I0412 20:17:33.943473  289404 start.go:309] post-start completed in 148.459431ms
	I0412 20:17:33.943559  289404 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0412 20:17:33.943611  289404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220412200421-42006
	I0412 20:17:33.979296  289404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49427 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/old-k8s-version-20220412200421-42006/id_rsa Username:docker}
	I0412 20:17:34.068745  289404 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0412 20:17:34.072931  289404 fix.go:57] fixHost completed within 4.557261996s
	I0412 20:17:34.072964  289404 start.go:81] releasing machines lock for "old-k8s-version-20220412200421-42006", held for 4.557323673s
	I0412 20:17:34.073067  289404 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-20220412200421-42006
	I0412 20:17:34.108785  289404 ssh_runner.go:195] Run: systemctl --version
	I0412 20:17:34.108829  289404 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0412 20:17:34.108852  289404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220412200421-42006
	I0412 20:17:34.108889  289404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220412200421-42006
	I0412 20:17:34.147630  289404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49427 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/old-k8s-version-20220412200421-42006/id_rsa Username:docker}
	I0412 20:17:34.147961  289404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49427 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/old-k8s-version-20220412200421-42006/id_rsa Username:docker}
	I0412 20:17:34.232522  289404 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0412 20:17:34.259820  289404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0412 20:17:34.270552  289404 docker.go:183] disabling docker service ...
	I0412 20:17:34.270627  289404 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0412 20:17:34.281466  289404 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0412 20:17:34.291898  289404 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0412 20:17:34.372403  289404 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0412 20:17:34.452290  289404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0412 20:17:34.462444  289404 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0412 20:17:34.475927  289404 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLm1vbml0b3IudjEuY2dyb3VwcyJdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuMSIKICAgIHN0YXRzX2NvbGxlY3RfcGVyaW9kID0gMTAKICAgIGVuYWJsZV90bHNfc3RyZWFtaW5nID0gZmFsc2UKICAgIG1heF9jb250YWluZXJfbG9nX2xpbmVfc2l6ZSA9IDE2Mzg0CiAgICByZXN0cmljdF9vb21fc2NvcmVfYWRqID0gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZF0KICAgICAgZGlzY2FyZF91bnBhY2tlZF9sYXllcnMgPSB0cnVlCiAgICAgIHNuYXBzaG90dGVyID0gIm92ZXJsYXlmcyIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQuZGVmYXVsdF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnVudHJ1c3RlZF93b3JrbG9hZF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICIiCiAgICAgICAgcnVudGltZV9lbmdpbmUgPSAiIgogICAgICAgIHJ1bnRpbWVfcm9vdCA9ICIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzLnJ1bmNdCiAgICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICBTeXN0ZW1kQ2dyb3VwID0gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY25pXQogICAgICBiaW5fZGlyID0gIi9vcHQvY25pL2JpbiIKICAgICAgY29uZl9kaXIgPSAiL2V0Yy9jbmkvbmV0Lm1rIgogICAgICBjb25mX3RlbXBsYXRlID0gIiIKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5yZWdpc3RyeV0KICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5Lm1pcnJvcnNdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5Lm1pcnJvcnMuImRvY2tlci5pbyJdCiAgICAgICAgICBlbmRwb2ludCA9IFsiaHR0cHM6Ly9yZWdpc3RyeS0xLmRvY2tlci5pbyJdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuc2VydmljZS52MS5kaWZmLXNlcnZpY2UiXQogICAgZGVmYXVsdCA9IFsid2Fsa2luZyJdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ2MudjEuc2NoZWR1bGVyIl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
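	[note] The containerd config above is shipped as a single base64 blob so it survives shell quoting; to see what actually lands in /etc/containerd/config.toml, decode it locally. A sketch, assuming the blob has been saved to a file named config.b64 (hypothetical name):
	    base64 -d config.b64 > config.toml                             # recover the TOML
	    grep -E 'conf_dir|sandbox_image|SystemdCgroup' config.toml     # spot-check key fields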
	I0412 20:17:34.489911  289404 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0412 20:17:34.497073  289404 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0412 20:17:34.504299  289404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0412 20:17:34.584100  289404 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0412 20:17:34.657988  289404 start.go:441] Will wait 60s for socket path /run/containerd/containerd.sock
	I0412 20:17:34.658055  289404 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0412 20:17:34.661997  289404 start.go:462] Will wait 60s for crictl version
	I0412 20:17:34.662052  289404 ssh_runner.go:195] Run: sudo crictl version
	I0412 20:17:34.688749  289404 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-04-12T20:17:34Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0412 20:17:45.736377  289404 ssh_runner.go:195] Run: sudo crictl version
	I0412 20:17:45.764253  289404 start.go:471] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.5.10
	RuntimeApiVersion:  v1alpha2
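	[note] The earlier "server is not initialized yet" failure is expected immediately after restarting containerd; minikube simply retries. A hand-rolled poll with the same effect (60s budget assumed, matching the wait above):
	    # Wait until the CRI endpoint answers, up to ~60s.
	    for _ in $(seq 1 30); do
	      sudo crictl version >/dev/null 2>&1 && break
	      sleep 2
	    done
	    sudo crictl version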
	I0412 20:17:45.764317  289404 ssh_runner.go:195] Run: containerd --version
	I0412 20:17:45.788116  289404 ssh_runner.go:195] Run: containerd --version
	I0412 20:17:45.813804  289404 out.go:176] * Preparing Kubernetes v1.16.0 on containerd 1.5.10 ...
	I0412 20:17:45.813902  289404 cli_runner.go:164] Run: docker network inspect old-k8s-version-20220412200421-42006 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0412 20:17:45.850078  289404 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0412 20:17:45.853619  289404 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0412 20:17:45.866312  289404 out.go:176]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0412 20:17:45.866409  289404 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0412 20:17:45.866484  289404 ssh_runner.go:195] Run: sudo crictl images --output json
	I0412 20:17:45.891403  289404 containerd.go:607] all images are preloaded for containerd runtime.
	I0412 20:17:45.891432  289404 containerd.go:521] Images already preloaded, skipping extraction
	I0412 20:17:45.891488  289404 ssh_runner.go:195] Run: sudo crictl images --output json
	I0412 20:17:45.917465  289404 containerd.go:607] all images are preloaded for containerd runtime.
	I0412 20:17:45.917491  289404 cache_images.go:84] Images are preloaded, skipping loading
	I0412 20:17:45.917536  289404 ssh_runner.go:195] Run: sudo crictl info
	I0412 20:17:45.942935  289404 cni.go:93] Creating CNI manager for ""
	I0412 20:17:45.942975  289404 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0412 20:17:45.942995  289404 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0412 20:17:45.943016  289404 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-20220412200421-42006 NodeName:old-k8s-version-20220412200421-42006 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0412 20:17:45.943146  289404 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "old-k8s-version-20220412200421-42006"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-20220412200421-42006
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.67.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0412 20:17:45.943244  289404 kubeadm.go:936] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-20220412200421-42006 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220412200421-42006 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0412 20:17:45.943306  289404 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0412 20:17:45.951356  289404 binaries.go:44] Found k8s binaries, skipping transfer
	I0412 20:17:45.951429  289404 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0412 20:17:45.959142  289404 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (581 bytes)
	I0412 20:17:45.973290  289404 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0412 20:17:45.987363  289404 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
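	[note] After the three scp's above, the effective kubelet invocation on the node can be confirmed with systemctl, which merges the base unit with the 10-kubeadm.conf drop-in:
	    sudo systemctl cat kubelet      # base unit + drop-in, as systemd sees it
	    sudo systemctl daemon-reload    # pick up drop-in edits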
	I0412 20:17:46.000890  289404 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0412 20:17:46.003861  289404 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0412 20:17:46.013912  289404 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220412200421-42006 for IP: 192.168.67.2
	I0412 20:17:46.014036  289404 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.key
	I0412 20:17:46.014072  289404 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.key
	I0412 20:17:46.014139  289404 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220412200421-42006/client.key
	I0412 20:17:46.014193  289404 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220412200421-42006/apiserver.key.c7fa3a9e
	I0412 20:17:46.014227  289404 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220412200421-42006/proxy-client.key
	I0412 20:17:46.014315  289404 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/42006.pem (1338 bytes)
	W0412 20:17:46.014376  289404 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/42006_empty.pem, impossibly tiny 0 bytes
	I0412 20:17:46.014389  289404 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem (1679 bytes)
	I0412 20:17:46.014416  289404 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem (1082 bytes)
	I0412 20:17:46.014441  289404 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem (1123 bytes)
	I0412 20:17:46.014463  289404 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem (1675 bytes)
	I0412 20:17:46.014502  289404 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/420062.pem (1708 bytes)
	I0412 20:17:46.015054  289404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220412200421-42006/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0412 20:17:46.033250  289404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220412200421-42006/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0412 20:17:46.051612  289404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220412200421-42006/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0412 20:17:46.069438  289404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/old-k8s-version-20220412200421-42006/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0412 20:17:46.087429  289404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0412 20:17:46.106400  289404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0412 20:17:46.126331  289404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0412 20:17:46.144926  289404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0412 20:17:46.163659  289404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/420062.pem --> /usr/share/ca-certificates/420062.pem (1708 bytes)
	I0412 20:17:46.182405  289404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0412 20:17:46.201225  289404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/42006.pem --> /usr/share/ca-certificates/42006.pem (1338 bytes)
	I0412 20:17:46.220095  289404 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0412 20:17:46.233532  289404 ssh_runner.go:195] Run: openssl version
	I0412 20:17:46.238551  289404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/420062.pem && ln -fs /usr/share/ca-certificates/420062.pem /etc/ssl/certs/420062.pem"
	I0412 20:17:46.246882  289404 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/420062.pem
	I0412 20:17:46.250144  289404 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Apr 12 19:26 /usr/share/ca-certificates/420062.pem
	I0412 20:17:46.250198  289404 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/420062.pem
	I0412 20:17:46.255293  289404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/420062.pem /etc/ssl/certs/3ec20f2e.0"
	I0412 20:17:46.263296  289404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0412 20:17:46.271317  289404 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0412 20:17:46.274644  289404 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Apr 12 19:21 /usr/share/ca-certificates/minikubeCA.pem
	I0412 20:17:46.274711  289404 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0412 20:17:46.279819  289404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0412 20:17:46.287252  289404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/42006.pem && ln -fs /usr/share/ca-certificates/42006.pem /etc/ssl/certs/42006.pem"
	I0412 20:17:46.295001  289404 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42006.pem
	I0412 20:17:46.298255  289404 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Apr 12 19:26 /usr/share/ca-certificates/42006.pem
	I0412 20:17:46.298337  289404 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42006.pem
	I0412 20:17:46.303307  289404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/42006.pem /etc/ssl/certs/51391683.0"
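	[note] The 3ec20f2e.0 / b5213941.0 / 51391683.0 link targets above are OpenSSL subject-hash names; they can be derived from the certificate instead of hard-coded. A sketch for one cert from this run:
	    CERT=/usr/share/ca-certificates/42006.pem
	    HASH=$(openssl x509 -hash -noout -in "$CERT")     # subject hash, e.g. 51391683
	    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"    # hashed symlink OpenSSL looks up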
	I0412 20:17:46.310562  289404 kubeadm.go:391] StartCluster: {Name:old-k8s-version-20220412200421-42006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-20220412200421-42006 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0412 20:17:46.310692  289404 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0412 20:17:46.310766  289404 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0412 20:17:46.336676  289404 cri.go:87] found id: "1bd2c2fccd8c547472f81fd84ffcc85248838b6f6bded8d4ba9f1c12dfb234c1"
	I0412 20:17:46.336702  289404 cri.go:87] found id: "f03411fc533041f9ddcf991f18a51b6055896e203a19557ce49131bc9e7796b4"
	I0412 20:17:46.336709  289404 cri.go:87] found id: "d1642a69585f2b5d8f43901e8a491cead56c56ef33038261d4145d7959922b9b"
	I0412 20:17:46.336718  289404 cri.go:87] found id: "6cc69a6c92a9c7e418d30d94f1777cbd24a28b39c530a70bc05aa2bb9749c133"
	I0412 20:17:46.336726  289404 cri.go:87] found id: "e47ba7bc7187c135dde6e6c116fd570d9338c6fa80edee55405758c75532e6db"
	I0412 20:17:46.336732  289404 cri.go:87] found id: "f29f2d4e263bc07cd05cd9c61510d49796a96af91aaf3c20135c8e50227408a5"
	I0412 20:17:46.336737  289404 cri.go:87] found id: "e3d3ef830b73a6caad316df060603879e4acd4e12edca47bc38cbc8b4e8f67a1"
	I0412 20:17:46.336743  289404 cri.go:87] found id: ""
	I0412 20:17:46.336781  289404 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0412 20:17:46.350978  289404 cri.go:114] JSON = null
	W0412 20:17:46.351029  289404 kubeadm.go:398] unpause failed: list paused: list returned 0 containers, but ps returned 7
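	[note] The unpause warning above records a disagreement between the CRI view and the runc view of the k8s.io root; both sides of that comparison can be reproduced with the exact commands the log shows:
	    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system  # 7 IDs in this run
	    sudo runc --root /run/containerd/runc/k8s.io list -f json                  # null in this run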
	I0412 20:17:46.351077  289404 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0412 20:17:46.359069  289404 kubeadm.go:402] found existing configuration files, will attempt cluster restart
	I0412 20:17:46.359093  289404 kubeadm.go:601] restartCluster start
	I0412 20:17:46.359140  289404 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0412 20:17:46.366326  289404 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:17:46.367582  289404 kubeconfig.go:116] verify returned: extract IP: "old-k8s-version-20220412200421-42006" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0412 20:17:46.368444  289404 kubeconfig.go:127] "old-k8s-version-20220412200421-42006" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig - will repair!
	I0412 20:17:46.369647  289404 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig: {Name:mk47182dadcc139652898f38b199a7292a3b4031 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 20:17:46.371957  289404 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0412 20:17:46.379643  289404 api_server.go:165] Checking apiserver status ...
	I0412 20:17:46.379702  289404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:17:46.388397  289404 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:17:46.588796  289404 api_server.go:165] Checking apiserver status ...
	I0412 20:17:46.588874  289404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:17:46.598135  289404 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:17:46.789302  289404 api_server.go:165] Checking apiserver status ...
	I0412 20:17:46.789389  289404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:17:46.798209  289404 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:17:46.989529  289404 api_server.go:165] Checking apiserver status ...
	I0412 20:17:46.989625  289404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:17:46.998886  289404 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:17:47.189239  289404 api_server.go:165] Checking apiserver status ...
	I0412 20:17:47.189346  289404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:17:47.198862  289404 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:17:47.389200  289404 api_server.go:165] Checking apiserver status ...
	I0412 20:17:47.389286  289404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:17:47.398241  289404 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:17:47.589313  289404 api_server.go:165] Checking apiserver status ...
	I0412 20:17:47.589388  289404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:17:47.598198  289404 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:17:47.789429  289404 api_server.go:165] Checking apiserver status ...
	I0412 20:17:47.789512  289404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:17:47.798393  289404 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:17:47.988615  289404 api_server.go:165] Checking apiserver status ...
	I0412 20:17:47.988696  289404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:17:47.997702  289404 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:17:48.188966  289404 api_server.go:165] Checking apiserver status ...
	I0412 20:17:48.189070  289404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:17:48.198201  289404 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:17:48.389562  289404 api_server.go:165] Checking apiserver status ...
	I0412 20:17:48.389638  289404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:17:48.398668  289404 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:17:48.588987  289404 api_server.go:165] Checking apiserver status ...
	I0412 20:17:48.589084  289404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:17:48.598056  289404 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:17:48.789219  289404 api_server.go:165] Checking apiserver status ...
	I0412 20:17:48.789320  289404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:17:48.798195  289404 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:17:48.989476  289404 api_server.go:165] Checking apiserver status ...
	I0412 20:17:48.989556  289404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:17:48.998331  289404 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:17:49.188797  289404 api_server.go:165] Checking apiserver status ...
	I0412 20:17:49.188869  289404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:17:49.197864  289404 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:17:49.389165  289404 api_server.go:165] Checking apiserver status ...
	I0412 20:17:49.389236  289404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:17:49.398385  289404 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:17:49.398411  289404 api_server.go:165] Checking apiserver status ...
	I0412 20:17:49.398456  289404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:17:49.408292  289404 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:17:49.408328  289404 kubeadm.go:576] needs reconfigure: apiserver error: timed out waiting for the condition
	I0412 20:17:49.408337  289404 kubeadm.go:1067] stopping kube-system containers ...
	I0412 20:17:49.408350  289404 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0412 20:17:49.408412  289404 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0412 20:17:49.437804  289404 cri.go:87] found id: "1bd2c2fccd8c547472f81fd84ffcc85248838b6f6bded8d4ba9f1c12dfb234c1"
	I0412 20:17:49.437833  289404 cri.go:87] found id: "f03411fc533041f9ddcf991f18a51b6055896e203a19557ce49131bc9e7796b4"
	I0412 20:17:49.437841  289404 cri.go:87] found id: "d1642a69585f2b5d8f43901e8a491cead56c56ef33038261d4145d7959922b9b"
	I0412 20:17:49.437847  289404 cri.go:87] found id: "6cc69a6c92a9c7e418d30d94f1777cbd24a28b39c530a70bc05aa2bb9749c133"
	I0412 20:17:49.437853  289404 cri.go:87] found id: "e47ba7bc7187c135dde6e6c116fd570d9338c6fa80edee55405758c75532e6db"
	I0412 20:17:49.437859  289404 cri.go:87] found id: "f29f2d4e263bc07cd05cd9c61510d49796a96af91aaf3c20135c8e50227408a5"
	I0412 20:17:49.437864  289404 cri.go:87] found id: "e3d3ef830b73a6caad316df060603879e4acd4e12edca47bc38cbc8b4e8f67a1"
	I0412 20:17:49.437870  289404 cri.go:87] found id: ""
	I0412 20:17:49.437875  289404 cri.go:232] Stopping containers: [1bd2c2fccd8c547472f81fd84ffcc85248838b6f6bded8d4ba9f1c12dfb234c1 f03411fc533041f9ddcf991f18a51b6055896e203a19557ce49131bc9e7796b4 d1642a69585f2b5d8f43901e8a491cead56c56ef33038261d4145d7959922b9b 6cc69a6c92a9c7e418d30d94f1777cbd24a28b39c530a70bc05aa2bb9749c133 e47ba7bc7187c135dde6e6c116fd570d9338c6fa80edee55405758c75532e6db f29f2d4e263bc07cd05cd9c61510d49796a96af91aaf3c20135c8e50227408a5 e3d3ef830b73a6caad316df060603879e4acd4e12edca47bc38cbc8b4e8f67a1]
	I0412 20:17:49.437925  289404 ssh_runner.go:195] Run: which crictl
	I0412 20:17:49.441008  289404 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop 1bd2c2fccd8c547472f81fd84ffcc85248838b6f6bded8d4ba9f1c12dfb234c1 f03411fc533041f9ddcf991f18a51b6055896e203a19557ce49131bc9e7796b4 d1642a69585f2b5d8f43901e8a491cead56c56ef33038261d4145d7959922b9b 6cc69a6c92a9c7e418d30d94f1777cbd24a28b39c530a70bc05aa2bb9749c133 e47ba7bc7187c135dde6e6c116fd570d9338c6fa80edee55405758c75532e6db f29f2d4e263bc07cd05cd9c61510d49796a96af91aaf3c20135c8e50227408a5 e3d3ef830b73a6caad316df060603879e4acd4e12edca47bc38cbc8b4e8f67a1
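	[note] The list-then-stop pair above composes into a single pipeline rather than pasting IDs by hand; a sketch (GNU xargs assumed for -r):
	    # Stop every kube-system container reported by the CRI.
	    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system \
	      | xargs -r sudo crictl stop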
	I0412 20:17:49.468746  289404 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0412 20:17:49.479225  289404 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0412 20:17:49.486664  289404 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5747 Apr 12 20:04 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5783 Apr 12 20:04 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5935 Apr 12 20:04 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5735 Apr 12 20:04 /etc/kubernetes/scheduler.conf
	
	I0412 20:17:49.486737  289404 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0412 20:17:49.493537  289404 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0412 20:17:49.500633  289404 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0412 20:17:49.507803  289404 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0412 20:17:49.515027  289404 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0412 20:17:49.522184  289404 kubeadm.go:678] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0412 20:17:49.522211  289404 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0412 20:17:49.574062  289404 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0412 20:17:50.154731  289404 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0412 20:17:50.308499  289404 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0412 20:17:50.384584  289404 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
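	[note] The restart path replays individual kubeadm init phases instead of a full init; the same sequence as a standalone loop, with the binary and config paths from this run:
	    B=/var/lib/minikube/binaries/v1.16.0
	    C=/var/tmp/minikube/kubeadm.yaml
	    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
	      # $phase is intentionally unquoted so "certs all" splits into two arguments.
	      sudo env PATH="$B:$PATH" kubeadm init phase $phase --config "$C"
	    done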
	I0412 20:17:50.509940  289404 api_server.go:51] waiting for apiserver process to appear ...
	I0412 20:17:50.510014  289404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:17:51.020417  289404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:17:51.521045  289404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:17:51.588779  289404 api_server.go:71] duration metric: took 1.078840712s to wait for apiserver process to appear ...
	I0412 20:17:51.588815  289404 api_server.go:87] waiting for apiserver healthz status ...
	I0412 20:17:51.588829  289404 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0412 20:17:51.589174  289404 api_server.go:256] stopped: https://192.168.67.2:8443/healthz: Get "https://192.168.67.2:8443/healthz": dial tcp 192.168.67.2:8443: connect: connection refused
	I0412 20:17:52.089936  289404 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0412 20:17:55.386346  289404 api_server.go:266] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0412 20:17:55.386393  289404 api_server.go:102] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
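	[note] The 500s above are the normal post-start-hook warm-up; /healthz returns the per-check breakdown in the body until every hook finishes. Polling by hand until it flips to 200 (a sketch; -k because the apiserver cert is not in the local trust store):
	    until curl -fsk https://192.168.67.2:8443/healthz >/dev/null; do
	      sleep 2
	    done
	    curl -sk 'https://192.168.67.2:8443/healthz?verbose'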
	I0412 20:17:55.589672  289404 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0412 20:17:55.679945  289404 api_server.go:266] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0412 20:17:55.680057  289404 api_server.go:102] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0412 20:17:56.089538  289404 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0412 20:17:56.094768  289404 api_server.go:266] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0412 20:17:56.094805  289404 api_server.go:102] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0412 20:17:56.589444  289404 api_server.go:240] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0412 20:17:56.594755  289404 api_server.go:266] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0412 20:17:56.601922  289404 api_server.go:140] control plane version: v1.16.0
	I0412 20:17:56.601948  289404 api_server.go:130] duration metric: took 5.013125628s to wait for apiserver health ...
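
The healthz phase above is a simple poll: hit /healthz, treat "connection refused" and HTTP 500 (post-start hooks still failing) as "not yet", and stop on the first 200. A minimal sketch of that loop, assuming a self-signed bootstrap certificate; the endpoint and timings come from the log, the rest is illustrative:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns 200 or the deadline passes.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The apiserver serves a self-signed cert during bootstrap, so a
		// bare probe like this typically skips verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned 200: control plane is reachable
			}
			// 500 with "[-]poststarthook/... failed" means keep waiting.
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.67.2:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
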
	I0412 20:17:56.601958  289404 cni.go:93] Creating CNI manager for ""
	I0412 20:17:56.601965  289404 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0412 20:17:56.604004  289404 out.go:176] * Configuring CNI (Container Networking Interface) ...
	I0412 20:17:56.604109  289404 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0412 20:17:56.608013  289404 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.16.0/kubectl ...
	I0412 20:17:56.608039  289404 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0412 20:17:56.621855  289404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
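
The scp + apply pair above renders the CNI manifest into /var/tmp/minikube/cni.yaml on the node and applies it with the version-matched kubectl. A hedged sketch of the same step run locally; paths are taken from the log, error handling is simplified, and "cni.yaml" in main is a hypothetical local copy of the manifest:

package main

import (
	"os"
	"os/exec"
)

// applyCNI writes the rendered manifest and applies it with the
// cluster-version kubectl, mirroring the scp + `kubectl apply -f` in the log.
func applyCNI(manifest []byte) error {
	const path = "/var/tmp/minikube/cni.yaml"
	if err := os.WriteFile(path, manifest, 0o644); err != nil {
		return err
	}
	cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.16.0/kubectl",
		"apply", "--kubeconfig=/var/lib/minikube/kubeconfig", "-f", path)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	manifest, err := os.ReadFile("cni.yaml") // hypothetical local copy of the manifest
	if err != nil {
		panic(err)
	}
	if err := applyCNI(manifest); err != nil {
		panic(err)
	}
}
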
	I0412 20:17:56.828475  289404 system_pods.go:43] waiting for kube-system pods to appear ...
	I0412 20:17:56.835721  289404 system_pods.go:59] 8 kube-system pods found
	I0412 20:17:56.835755  289404 system_pods.go:61] "coredns-5644d7b6d9-z6lnj" [dac5b00a-e450-4c85-b1dd-54344be79d5a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
	I0412 20:17:56.835762  289404 system_pods.go:61] "etcd-old-k8s-version-20220412200421-42006" [8305edc2-21b5-4258-ad07-8687f7c7f76f] Running
	I0412 20:17:56.835766  289404 system_pods.go:61] "kindnet-xxqjk" [306e6dc0-594c-4013-acc5-0fcbdf38806f] Running
	I0412 20:17:56.835772  289404 system_pods.go:61] "kube-apiserver-old-k8s-version-20220412200421-42006" [bf9e128c-6913-44d5-b0a7-1954fbcbf9bc] Running
	I0412 20:17:56.835776  289404 system_pods.go:61] "kube-controller-manager-old-k8s-version-20220412200421-42006" [7fac424e-5a0c-410f-8d27-6519915d6d2f] Running
	I0412 20:17:56.835780  289404 system_pods.go:61] "kube-proxy-nt4pk" [e0d683c7-40fd-43e1-ac82-a740e53a8513] Running
	I0412 20:17:56.835784  289404 system_pods.go:61] "kube-scheduler-old-k8s-version-20220412200421-42006" [8e70e26b-0e21-40ae-9d51-d1f712a8800c] Running
	I0412 20:17:56.835790  289404 system_pods.go:61] "storage-provisioner" [fc4dc4cd-6bf9-4b27-953d-a654ba5e298a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)
	I0412 20:17:56.835795  289404 system_pods.go:74] duration metric: took 7.294557ms to wait for pod list to return data ...
	I0412 20:17:56.835802  289404 node_conditions.go:102] verifying NodePressure condition ...
	I0412 20:17:56.838835  289404 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0412 20:17:56.838868  289404 node_conditions.go:123] node cpu capacity is 8
	I0412 20:17:56.838886  289404 node_conditions.go:105] duration metric: took 3.076017ms to run NodePressure ...
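
The two Pending pods above are blocked by a node taint ("1 node(s) had taints that the pod didn't tolerate"). During a control-plane restart this is commonly node.kubernetes.io/not-ready:NoSchedule, which the kubelet clears once the node reports Ready; that specific taint is an assumption here, not something the log states. A quick way to see which taint is actually in play:

package main

import (
	"os"
	"os/exec"
)

// Print each node's name and taints; the jsonpath template is passed
// through verbatim for kubectl to interpret.
func main() {
	cmd := exec.Command("kubectl", "get", "nodes",
		"-o", `jsonpath={range .items[*]}{.metadata.name}{"\t"}{.spec.taints}{"\n"}{end}`)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		os.Exit(1)
	}
}
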
	I0412 20:17:56.838911  289404 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0412 20:17:57.010809  289404 kubeadm.go:737] waiting for restarted kubelet to initialise ...
	I0412 20:17:57.014381  289404 retry.go:31] will retry after 360.127272ms: kubelet not initialised
	I0412 20:17:57.378770  289404 retry.go:31] will retry after 436.71002ms: kubelet not initialised
	I0412 20:17:57.820671  289404 retry.go:31] will retry after 527.46423ms: kubelet not initialised
	I0412 20:17:58.352826  289404 retry.go:31] will retry after 780.162888ms: kubelet not initialised
	I0412 20:17:59.137522  289404 retry.go:31] will retry after 1.502072952s: kubelet not initialised
	I0412 20:18:00.644272  289404 retry.go:31] will retry after 1.073826528s: kubelet not initialised
	I0412 20:18:01.722982  289404 retry.go:31] will retry after 1.869541159s: kubelet not initialised
	I0412 20:18:03.598023  289404 retry.go:31] will retry after 2.549945972s: kubelet not initialised
	I0412 20:18:06.152243  289404 retry.go:31] will retry after 5.131623747s: kubelet not initialised
	I0412 20:18:11.289186  289404 retry.go:31] will retry after 9.757045979s: kubelet not initialised
	I0412 20:18:21.050530  289404 retry.go:31] will retry after 18.937774914s: kubelet not initialised
	I0412 20:18:39.992607  289404 retry.go:31] will retry after 15.44552029s: kubelet not initialised
	I0412 20:18:55.442437  289404 kubeadm.go:752] kubelet initialised
	I0412 20:18:55.442463  289404 kubeadm.go:753] duration metric: took 58.431626455s waiting for restarted kubelet to initialise ...
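
The retry.go cadence above (≈360ms, 436ms, 527ms, 780ms, ... up to ~19s) looks like a jittered exponential backoff: each wait grows by a rough factor with randomization, capped before the overall deadline. A minimal sketch of that pattern; the growth factor, jitter fraction, and cap are assumptions, not minikube's exact parameters:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryBackoff runs op until it succeeds or the deadline passes, sleeping
// a jittered, exponentially growing interval between attempts.
func retryBackoff(op func() error, initial, cap, deadline time.Duration) error {
	wait := initial
	stop := time.Now().Add(deadline)
	for {
		err := op()
		if err == nil {
			return nil
		}
		if time.Now().After(stop) {
			return fmt.Errorf("gave up after %s: %w", deadline, err)
		}
		// Add up to 25% jitter, then grow the base wait ~1.5x, capped.
		sleep := wait + time.Duration(rand.Int63n(int64(wait/4)+1))
		fmt.Printf("will retry after %s: %v\n", sleep, err)
		time.Sleep(sleep)
		wait = time.Duration(float64(wait) * 1.5)
		if wait > cap {
			wait = cap
		}
	}
}

func main() {
	start := time.Now()
	err := retryBackoff(func() error {
		if time.Since(start) < 3*time.Second {
			return errors.New("kubelet not initialised") // stand-in for the real check
		}
		return nil
	}, 360*time.Millisecond, 20*time.Second, time.Minute)
	fmt.Println("result:", err)
}
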
	I0412 20:18:55.442472  289404 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0412 20:18:55.446881  289404 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace to be "Ready" ...
	I0412 20:18:57.452309  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:18:59.452702  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:01.951942  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:03.952202  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:05.952347  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:07.952479  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:10.452258  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:12.453023  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:14.453080  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:16.952944  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:19.452528  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:21.952621  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:23.952660  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:25.953037  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:28.452797  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:30.952242  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:32.952805  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:35.452316  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:37.951759  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:39.952865  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:41.952966  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:44.451931  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:46.452616  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:48.952981  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:51.452872  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:53.952148  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:56.452819  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:58.952504  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:01.452181  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:03.952960  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:05.953040  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:08.452051  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:10.452446  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:12.452585  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:14.952918  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:17.453178  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:19.953047  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:22.452913  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:24.952197  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:26.952775  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:29.452518  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:31.452773  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:33.951896  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:35.952124  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:37.952888  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:40.452829  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:42.952723  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:45.452682  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:47.952663  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:49.952833  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:51.953215  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:54.452966  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:56.952662  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:20:59.452894  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:01.452993  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:03.952039  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:06.452224  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:08.951951  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:10.952389  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:12.952480  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:15.452284  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:17.953019  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:20.451991  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:22.452912  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:24.952892  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:27.453024  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:29.953115  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:32.452095  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:34.453190  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:36.952740  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:39.453093  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:41.952831  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:44.452526  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:46.453025  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:48.952697  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:51.452345  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:53.452917  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:55.952397  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:21:57.952650  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	[... 23 more identical pod_ready.go:102 polls, 20:22:00 through 20:22:51 at ~2.5s intervals, each reporting the same Pending/Unschedulable status for pod "coredns-5644d7b6d9-rdxgk": 0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate. ...]
	I0412 20:22:53.952226  289404 pod_ready.go:102] pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:18:12 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:22:55.449553  289404 pod_ready.go:81] duration metric: took 4m0.002631772s waiting for pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace to be "Ready" ...
	E0412 20:22:55.449598  289404 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "coredns-5644d7b6d9-rdxgk" in "kube-system" namespace to be "Ready" (will not retry!)
	I0412 20:22:55.449626  289404 pod_ready.go:38] duration metric: took 4m0.007144091s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0412 20:22:55.449665  289404 kubeadm.go:605] restartCluster took 5m9.090565131s
	W0412 20:22:55.449859  289404 out.go:241] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0412 20:22:55.449901  289404 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0412 20:22:56.788407  289404 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.338480882s)
	I0412 20:22:56.788465  289404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0412 20:22:56.798571  289404 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0412 20:22:56.806252  289404 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0412 20:22:56.806310  289404 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0412 20:22:56.814094  289404 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0412 20:22:56.814147  289404 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0412 20:22:57.205705  289404 out.go:203]   - Generating certificates and keys ...
	I0412 20:22:57.761892  289404 out.go:203]   - Booting up control plane ...
	I0412 20:23:06.805594  289404 out.go:203]   - Configuring RBAC rules ...
	I0412 20:23:07.228571  289404 cni.go:93] Creating CNI manager for ""
	I0412 20:23:07.228608  289404 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0412 20:23:07.230875  289404 out.go:176] * Configuring CNI (Container Networking Interface) ...
	I0412 20:23:07.230960  289404 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0412 20:23:07.235577  289404 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.16.0/kubectl ...
	I0412 20:23:07.235606  289404 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0412 20:23:07.249805  289404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0412 20:23:07.476958  289404 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0412 20:23:07.477058  289404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:07.477062  289404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.25.2 minikube.k8s.io/commit=dcd548d63d1c0dcbdc0ffc0bd37d4379117c142f minikube.k8s.io/name=old-k8s-version-20220412200421-42006 minikube.k8s.io/updated_at=2022_04_12T20_23_07_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:07.617207  289404 ops.go:34] apiserver oom_adj: -16
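	(Note: the oom_adj probe above reduces to the one-liner below; the -16 reported by ops.go tells the kernel's OOM killer to strongly deprioritize killing the apiserver. This sketch assumes pgrep matches exactly one kube-apiserver process:)
	# read the OOM-killer adjustment of the running apiserver
	cat /proc/$(pgrep kube-apiserver)/oom_adj    # prints -16 in this run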
	I0412 20:23:07.617401  289404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	[... 28 more identical "kubectl get sa default" retries, 20:23:08 through 20:23:21 at ~0.5s intervals ...]
	I0412 20:23:22.195840  289404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:22.265152  289404 kubeadm.go:1020] duration metric: took 14.788147094s to wait for elevateKubeSystemPrivileges.
	I0412 20:23:22.265190  289404 kubeadm.go:393] StartCluster complete in 5m35.954640439s
	I0412 20:23:22.265216  289404 settings.go:142] acquiring lock: {Name:mkaf0259d09993f7f0249c32b54fea561e21f88c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 20:23:22.265344  289404 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0412 20:23:22.266642  289404 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig: {Name:mk47182dadcc139652898f38b199a7292a3b4031 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 20:23:22.781755  289404 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "old-k8s-version-20220412200421-42006" rescaled to 1
	I0412 20:23:22.781838  289404 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0412 20:23:22.784342  289404 out.go:176] * Verifying Kubernetes components...
	I0412 20:23:22.784399  289404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0412 20:23:22.781888  289404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0412 20:23:22.781911  289404 addons.go:415] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I0412 20:23:22.784549  289404 addons.go:65] Setting storage-provisioner=true in profile "old-k8s-version-20220412200421-42006"
	I0412 20:23:22.784574  289404 addons.go:153] Setting addon storage-provisioner=true in "old-k8s-version-20220412200421-42006"
	W0412 20:23:22.784587  289404 addons.go:165] addon storage-provisioner should already be in state true
	I0412 20:23:22.784588  289404 addons.go:65] Setting dashboard=true in profile "old-k8s-version-20220412200421-42006"
	I0412 20:23:22.784607  289404 addons.go:153] Setting addon dashboard=true in "old-k8s-version-20220412200421-42006"
	I0412 20:23:22.782092  289404 config.go:178] Loaded profile config "old-k8s-version-20220412200421-42006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I0412 20:23:22.784626  289404 addons.go:65] Setting metrics-server=true in profile "old-k8s-version-20220412200421-42006"
	I0412 20:23:22.784639  289404 addons.go:153] Setting addon metrics-server=true in "old-k8s-version-20220412200421-42006"
	I0412 20:23:22.784643  289404 host.go:66] Checking if "old-k8s-version-20220412200421-42006" exists ...
	W0412 20:23:22.784654  289404 addons.go:165] addon metrics-server should already be in state true
	I0412 20:23:22.784604  289404 addons.go:65] Setting default-storageclass=true in profile "old-k8s-version-20220412200421-42006"
	I0412 20:23:22.784699  289404 host.go:66] Checking if "old-k8s-version-20220412200421-42006" exists ...
	I0412 20:23:22.784706  289404 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-20220412200421-42006"
	W0412 20:23:22.784622  289404 addons.go:165] addon dashboard should already be in state true
	I0412 20:23:22.784854  289404 host.go:66] Checking if "old-k8s-version-20220412200421-42006" exists ...
	I0412 20:23:22.784998  289404 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220412200421-42006 --format={{.State.Status}}
	I0412 20:23:22.785175  289404 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220412200421-42006 --format={{.State.Status}}
	I0412 20:23:22.785177  289404 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220412200421-42006 --format={{.State.Status}}
	I0412 20:23:22.785289  289404 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220412200421-42006 --format={{.State.Status}}
	I0412 20:23:22.834905  289404 out.go:176]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0412 20:23:22.834967  289404 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0412 20:23:22.834839  289404 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0412 20:23:22.834976  289404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0412 20:23:22.835108  289404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220412200421-42006
	I0412 20:23:22.835109  289404 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0412 20:23:22.835146  289404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0412 20:23:22.835197  289404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220412200421-42006
	I0412 20:23:22.840482  289404 addons.go:153] Setting addon default-storageclass=true in "old-k8s-version-20220412200421-42006"
	W0412 20:23:22.840512  289404 addons.go:165] addon default-storageclass should already be in state true
	I0412 20:23:22.840547  289404 host.go:66] Checking if "old-k8s-version-20220412200421-42006" exists ...
	I0412 20:23:22.841070  289404 cli_runner.go:164] Run: docker container inspect old-k8s-version-20220412200421-42006 --format={{.State.Status}}
	I0412 20:23:22.843111  289404 out.go:176]   - Using image kubernetesui/dashboard:v2.5.1
	I0412 20:23:22.844712  289404 out.go:176]   - Using image k8s.gcr.io/echoserver:1.4
	I0412 20:23:22.844786  289404 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0412 20:23:22.844804  289404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0412 20:23:22.844869  289404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220412200421-42006
	I0412 20:23:22.883155  289404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49427 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/old-k8s-version-20220412200421-42006/id_rsa Username:docker}
	I0412 20:23:22.885010  289404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49427 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/old-k8s-version-20220412200421-42006/id_rsa Username:docker}
	I0412 20:23:22.885724  289404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49427 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/old-k8s-version-20220412200421-42006/id_rsa Username:docker}
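	(Note: the repeated docker container inspect -f calls above use a Go template to read the dynamically published host port for the container's 22/tcp mapping; the sshutil clients then dial that port on 127.0.0.1, port 49427 in this run. A standalone sketch of the same lookup:)
	docker container inspect \
	    -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
	    old-k8s-version-20220412200421-42006    # prints 49427 in this run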
	I0412 20:23:22.891532  289404 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0412 20:23:22.891561  289404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0412 20:23:22.891613  289404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20220412200421-42006
	I0412 20:23:22.894872  289404 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-20220412200421-42006" to be "Ready" ...
	I0412 20:23:22.894917  289404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.67.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
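	(Note: the sed pipeline above injects a hosts stanza into the CoreDNS Corefile, immediately before its "forward . /etc/resolv.conf" line, so that host.minikube.internal resolves to the host gateway. Reconstructed from the sed expression, the inserted stanza is:)
	        hosts {
	           192.168.67.1 host.minikube.internal
	           fallthrough
	        }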
	I0412 20:23:22.941013  289404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49427 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/old-k8s-version-20220412200421-42006/id_rsa Username:docker}
	I0412 20:23:23.009112  289404 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0412 20:23:23.009152  289404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0412 20:23:23.017044  289404 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0412 20:23:23.017070  289404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0412 20:23:23.087289  289404 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0412 20:23:23.087324  289404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0412 20:23:23.098845  289404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0412 20:23:23.100553  289404 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0412 20:23:23.100586  289404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0412 20:23:23.180997  289404 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0412 20:23:23.181029  289404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0412 20:23:23.199679  289404 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0412 20:23:23.199710  289404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0412 20:23:23.200117  289404 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0412 20:23:23.200143  289404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0412 20:23:23.216261  289404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0412 20:23:23.217306  289404 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0412 20:23:23.217335  289404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0412 20:23:23.293044  289404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0412 20:23:23.296386  289404 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0412 20:23:23.296416  289404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0412 20:23:23.381958  289404 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0412 20:23:23.381988  289404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0412 20:23:23.400957  289404 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0412 20:23:23.400986  289404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0412 20:23:23.404306  289404 start.go:777] {"host.minikube.internal": 192.168.67.1} host record injected into CoreDNS
	I0412 20:23:23.485207  289404 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0412 20:23:23.485240  289404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0412 20:23:23.501224  289404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0412 20:23:24.002810  289404 addons.go:386] Verifying addon metrics-server=true in "old-k8s-version-20220412200421-42006"
	I0412 20:23:24.323632  289404 out.go:176] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0412 20:23:24.323662  289404 addons.go:417] enableAddons completed in 1.541765473s
	I0412 20:23:24.904515  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	[... 101 more identical node_ready.go:58 polls, 20:23:26 through 20:27:18 at ~2.5s intervals, all reporting node "old-k8s-version-20220412200421-42006" status "Ready":"False" ...]
	I0412 20:27:21.405385  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:27:22.907629  289404 node_ready.go:38] duration metric: took 4m0.012711851s waiting for node "old-k8s-version-20220412200421-42006" to be "Ready" ...
	I0412 20:27:22.910753  289404 out.go:176] 
	W0412 20:27:22.910934  289404 out.go:241] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0412 20:27:22.910950  289404 out.go:241] * 
	W0412 20:27:22.911829  289404 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0412 20:27:22.914072  289404 out.go:176] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:243: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-20220412200421-42006 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.16.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220412200421-42006
helpers_test.go:235: (dbg) docker inspect old-k8s-version-20220412200421-42006:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a5e4ff2bbf6e0c1f98d862b7c5909f328d958a622c77ca8f2a1aeb8757f4bc42",
	        "Created": "2022-04-12T20:04:30.270409412Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 289668,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-04-12T20:17:29.938914583Z",
	            "FinishedAt": "2022-04-12T20:17:28.601618224Z"
	        },
	        "Image": "sha256:44d43b69f3d5ba7f801dca891b535f23f9839671e82277938ec7dc42a22c50d6",
	        "ResolvConfPath": "/var/lib/docker/containers/a5e4ff2bbf6e0c1f98d862b7c5909f328d958a622c77ca8f2a1aeb8757f4bc42/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a5e4ff2bbf6e0c1f98d862b7c5909f328d958a622c77ca8f2a1aeb8757f4bc42/hostname",
	        "HostsPath": "/var/lib/docker/containers/a5e4ff2bbf6e0c1f98d862b7c5909f328d958a622c77ca8f2a1aeb8757f4bc42/hosts",
	        "LogPath": "/var/lib/docker/containers/a5e4ff2bbf6e0c1f98d862b7c5909f328d958a622c77ca8f2a1aeb8757f4bc42/a5e4ff2bbf6e0c1f98d862b7c5909f328d958a622c77ca8f2a1aeb8757f4bc42-json.log",
	        "Name": "/old-k8s-version-20220412200421-42006",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "old-k8s-version-20220412200421-42006:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20220412200421-42006",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/7832f59e03daf68e56b6521f25b5ed3223d02619c327fdde0f78d7822640d042-init/diff:/var/lib/docker/overlay2/a46d95d024de4bf9705eb193a92586bdab1878cd991975232b71b00099a9dcbd/diff:/var/lib/docker/overlay2/ea82ee4a684697cc3575193cd81b57372b927c9bf8e744fce634f9abd0ce56f9/diff:/var/lib/docker/overlay2/78746ad8dd0d6497f442bd186c99cfd280a7ed0ff07c9d33d217c0f00c8c4565/diff:/var/lib/docker/overlay2/a402f380eceb56655ea5f1e6ca4a61a01ae014a5df04f1a7d02d8f57ff3e6c84/diff:/var/lib/docker/overlay2/b27a231791a4d14a662f9e6e34fdd213411e56cc17149199657aa480018b3c72/diff:/var/lib/docker/overlay2/0a44e7fc2c8d5589d496b9d0585d39e8e142f48342ff9669a35c370bd0298e42/diff:/var/lib/docker/overlay2/6ca98e52ca7d4cc60d14bd2db9969dd3356e0e0ce3acd5bfb5734e6e59f52c7e/diff:/var/lib/docker/overlay2/9957a7c00c30c9d801326093ddf20994a7ee1daaa54bc4dac5c2dd6d8711bd7e/diff:/var/lib/docker/overlay2/f7a1aafecf6ee716c484b5eecbbf236a53607c253fe283c289707fad85495a88/diff:/var/lib/docker/overlay2/fe8cd1
26522650fedfc827751e0b74da9a882ff48de51bc9dee6428ee3bc1122/diff:/var/lib/docker/overlay2/5b4cc7e4a78288063ad39231ca158608aa28e9dec6015d4e186e4c4d6888017f/diff:/var/lib/docker/overlay2/2a754ceb6abee0f92c99667fae50c7899233e94595630e9caffbf73cda1ff741/diff:/var/lib/docker/overlay2/9e69139d9b2bc63ab678378e004018ece394ec37e8289ba5eb30901dda160da5/diff:/var/lib/docker/overlay2/3db8e6413b3a1f309b81d2e1a79c3d239c4e4568b31a6f4bf92511f477f3a61d/diff:/var/lib/docker/overlay2/5ab54e45d09e2d6da4f4228ebae3075b5974e1d847526c1011fc7368392ef0d2/diff:/var/lib/docker/overlay2/6daf6a3cf916347bbbb70ace4aab29dd0f272dc9e39d6b0bf14940470857f1d5/diff:/var/lib/docker/overlay2/b85d29df9ed74e769c82a956eb46ca4eaf51018e94270fee2f58a6f2d82c354c/diff:/var/lib/docker/overlay2/0804b9c30e0dcc68e15139106e47bca1969b010d520652c87ff1476f5da9b799/diff:/var/lib/docker/overlay2/2ef50ba91c77826aae2efca8daf7194c2d56fd8e745476a35413585cdab580a6/diff:/var/lib/docker/overlay2/6f5a272367c30d47254dedc8a42e6b2791c406c3b74fd6a8242d568e4ec362e3/diff:/var/lib/d
ocker/overlay2/e978bd5ca7463862ca1b51d0bf19f95d916464dc866f09f1ab4a5ae4c082c3a9/diff:/var/lib/docker/overlay2/0d60a5805e276ca3bff4824250eab1d2960e9d10d28282e07652204c07dc107f/diff:/var/lib/docker/overlay2/d00efa0bc999057fcf3efdeed81022cc8b9b9871919f11d7d9199a3d22fda41b/diff:/var/lib/docker/overlay2/44d3db5bf7925c4cc8ee60008ff23d799e12ea6586850d797b930fa796788861/diff:/var/lib/docker/overlay2/4af15c525b7ce96b7fd4117c156f53cf9099702641c2907909c12b7019563d44/diff:/var/lib/docker/overlay2/ae9ca4b8da4afb1303158a42ec2ac83dc057c0eaefcd69b7eeaa094ae24a39e7/diff:/var/lib/docker/overlay2/afb8ebd776ddcba17d1056f2350cd0b303c6664964644896a92e9c07252b5d95/diff:/var/lib/docker/overlay2/41b6235378ad54ccaec907f16811e7cd66bd777db63151293f4d8247a33af8f1/diff:/var/lib/docker/overlay2/e079465076581cb577a9d5c7d676cecb6495ddd73d9fc330e734203dd7e48607/diff:/var/lib/docker/overlay2/2d3a7c3e62a99d54d94c2562e13b904453442bda8208afe73cdbe1afdbdd0684/diff:/var/lib/docker/overlay2/b9e03b9cbc1c5a9bbdbb0c99ca5d7539c2fa81a37872c40e07377b52f19
50f4b/diff:/var/lib/docker/overlay2/fd0b72378869edec809e7ead1e4448ae67c73245e0e98d751c51253c80f12d56/diff:/var/lib/docker/overlay2/a34f5625ad35eb2eb1058204a5c23590d70d9aae62a3a0cf05f87501c388ccde/diff:/var/lib/docker/overlay2/6221ad5f4d7b133c35d96ab112cf2eb437196475a72ea0ec8952c058c6644381/diff:/var/lib/docker/overlay2/b33a322162ab62a47e5e731b35da4a989d8a79fcb67e1925b109eace6772370c/diff:/var/lib/docker/overlay2/b52fc81aca49f276f1c709fa139521063628f4042b9da5969a3487a57ee3226b/diff:/var/lib/docker/overlay2/5b4d11a181cad1ea657c7ea99d422b51c942ece21b8d24442b4e8806644e0e1c/diff:/var/lib/docker/overlay2/1620ce1d42f02f38d07f3ff0970e3df6940a3be20f3c7cd835f4f40f5cc2d010/diff:/var/lib/docker/overlay2/43f18c528700dc241024bb24f43a0d5192ecc9575f4b053582410f6265326434/diff:/var/lib/docker/overlay2/e59874999e485483e50da428a499e40c91890c33515857454d7a64bc04ca0c43/diff:/var/lib/docker/overlay2/a120ff1bbaa325cd87d2682d6751d3bf287b66d4bbe31bd1f9f6283d724491ac/diff:/var/lib/docker/overlay2/a6a6f3646fabc023283ff6349b9627be8332c4
bb740688f8fda12c98bd76b725/diff:/var/lib/docker/overlay2/3c2b110c4b3a8689b2792b2b73f99f06bd9858b494c2164e812208579b0223f2/diff:/var/lib/docker/overlay2/98e3881e2e4128283f8d66fafc082bc795e22eab77f135635d3249367b92ba5c/diff:/var/lib/docker/overlay2/ce937670cf64eff618c699bfd15e46c6d70c0184fef594182e5ec6df83b265bc/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7832f59e03daf68e56b6521f25b5ed3223d02619c327fdde0f78d7822640d042/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7832f59e03daf68e56b6521f25b5ed3223d02619c327fdde0f78d7822640d042/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7832f59e03daf68e56b6521f25b5ed3223d02619c327fdde0f78d7822640d042/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20220412200421-42006",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20220412200421-42006/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20220412200421-42006",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20220412200421-42006",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20220412200421-42006",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "53e31e93073e14b87893ecc02eec943a790f513e23d81081fb89673144f54f48",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49427"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49426"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49423"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49425"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49424"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/53e31e93073e",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20220412200421-42006": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "a5e4ff2bbf6e",
	                        "old-k8s-version-20220412200421-42006"
	                    ],
	                    "NetworkID": "0b96a6a249d72d5fff5d5b9db029edbfc6a07a56e8064108c65000591927cbc6",
	                    "EndpointID": "6781e09d44ca1ec39a13b240ba7487d8f08130968a667575f2ffa3cc79c9fd8d",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
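
The block above is ordinary docker container inspect JSON for the test container. As a minimal sketch (assuming the container name from the log above; the host ports are assigned per run and will differ), the fields the harness needs can be pulled out with Go templates instead of scanning the full dump:

	docker container inspect old-k8s-version-20220412200421-42006 \
	  --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'   # SSH host port (49427 in this run)
	docker container inspect old-k8s-version-20220412200421-42006 \
	  --format '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}'          # container IP (192.168.67.2 in this run)

The first template is exactly the one minikube's cli_runner issues later in this log to locate the SSH port.
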
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20220412200421-42006 -n old-k8s-version-20220412200421-42006
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-20220412200421-42006 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-20220412200421-42006 logs -n 25: (1.063727441s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                            Args                            |                     Profile                     |  User   | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| addons  | enable metrics-server -p                                   | newest-cni-20220412201253-42006                 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:13:47 UTC | Tue, 12 Apr 2022 20:13:48 UTC |
	|         | newest-cni-20220412201253-42006                            |                                                 |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                               |                               |
	| stop    | -p                                                         | newest-cni-20220412201253-42006                 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:13:48 UTC | Tue, 12 Apr 2022 20:14:08 UTC |
	|         | newest-cni-20220412201253-42006                            |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | newest-cni-20220412201253-42006                 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:14:08 UTC | Tue, 12 Apr 2022 20:14:08 UTC |
	|         | newest-cni-20220412201253-42006                            |                                                 |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                               |                               |
	| start   | -p newest-cni-20220412201253-42006 --memory=2200           | newest-cni-20220412201253-42006                 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:14:08 UTC | Tue, 12 Apr 2022 20:14:42 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                 |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                 |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                 |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                 |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=containerd            |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.23.6-rc.0                          |                                                 |         |         |                               |                               |
	| ssh     | -p                                                         | newest-cni-20220412201253-42006                 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:14:43 UTC | Tue, 12 Apr 2022 20:14:43 UTC |
	|         | newest-cni-20220412201253-42006                            |                                                 |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                               |                               |
	| pause   | -p                                                         | newest-cni-20220412201253-42006                 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:14:43 UTC | Tue, 12 Apr 2022 20:14:44 UTC |
	|         | newest-cni-20220412201253-42006                            |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                               |                               |
	| unpause | -p                                                         | newest-cni-20220412201253-42006                 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:14:45 UTC | Tue, 12 Apr 2022 20:14:45 UTC |
	|         | newest-cni-20220412201253-42006                            |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                               |                               |
	| delete  | -p                                                         | newest-cni-20220412201253-42006                 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:14:46 UTC | Tue, 12 Apr 2022 20:14:49 UTC |
	|         | newest-cni-20220412201253-42006                            |                                                 |         |         |                               |                               |
	| delete  | -p                                                         | newest-cni-20220412201253-42006                 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:14:49 UTC | Tue, 12 Apr 2022 20:14:49 UTC |
	|         | newest-cni-20220412201253-42006                            |                                                 |         |         |                               |                               |
	| -p      | old-k8s-version-20220412200421-42006                       | old-k8s-version-20220412200421-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:17:18 UTC | Tue, 12 Apr 2022 20:17:19 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	| -p      | old-k8s-version-20220412200421-42006                       | old-k8s-version-20220412200421-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:17:20 UTC | Tue, 12 Apr 2022 20:17:21 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | old-k8s-version-20220412200421-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:17:22 UTC | Tue, 12 Apr 2022 20:17:22 UTC |
	|         | old-k8s-version-20220412200421-42006                       |                                                 |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                               |                               |
	| -p      | default-k8s-different-port-20220412201228-42006            | default-k8s-different-port-20220412201228-42006 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:17:23 UTC | Tue, 12 Apr 2022 20:17:24 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	| stop    | -p                                                         | old-k8s-version-20220412200421-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:17:23 UTC | Tue, 12 Apr 2022 20:17:28 UTC |
	|         | old-k8s-version-20220412200421-42006                       |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | old-k8s-version-20220412200421-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:17:29 UTC | Tue, 12 Apr 2022 20:17:29 UTC |
	|         | old-k8s-version-20220412200421-42006                       |                                                 |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                               |                               |
	| -p      | embed-certs-20220412200510-42006                           | embed-certs-20220412200510-42006                | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:18:10 UTC | Tue, 12 Apr 2022 20:18:11 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	| -p      | embed-certs-20220412200510-42006                           | embed-certs-20220412200510-42006                | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:18:13 UTC | Tue, 12 Apr 2022 20:18:13 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | embed-certs-20220412200510-42006                | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:18:14 UTC | Tue, 12 Apr 2022 20:18:14 UTC |
	|         | embed-certs-20220412200510-42006                           |                                                 |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                               |                               |
	| stop    | -p                                                         | embed-certs-20220412200510-42006                | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:18:15 UTC | Tue, 12 Apr 2022 20:18:25 UTC |
	|         | embed-certs-20220412200510-42006                           |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | embed-certs-20220412200510-42006                | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:18:25 UTC | Tue, 12 Apr 2022 20:18:25 UTC |
	|         | embed-certs-20220412200510-42006                           |                                                 |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                               |                               |
	| -p      | default-k8s-different-port-20220412201228-42006            | default-k8s-different-port-20220412201228-42006 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:25:26 UTC | Tue, 12 Apr 2022 20:25:27 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	| -p      | default-k8s-different-port-20220412201228-42006            | default-k8s-different-port-20220412201228-42006 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:25:28 UTC | Tue, 12 Apr 2022 20:25:29 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | default-k8s-different-port-20220412201228-42006 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:25:29 UTC | Tue, 12 Apr 2022 20:25:30 UTC |
	|         | default-k8s-different-port-20220412201228-42006            |                                                 |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                               |                               |
	| stop    | -p                                                         | default-k8s-different-port-20220412201228-42006 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:25:30 UTC | Tue, 12 Apr 2022 20:25:40 UTC |
	|         | default-k8s-different-port-20220412201228-42006            |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | default-k8s-different-port-20220412201228-42006 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:25:40 UTC | Tue, 12 Apr 2022 20:25:40 UTC |
	|         | default-k8s-different-port-20220412201228-42006            |                                                 |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                               |                               |
	|---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/04/12 20:25:40
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.18 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0412 20:25:40.977489  302775 out.go:297] Setting OutFile to fd 1 ...
	I0412 20:25:40.977641  302775 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0412 20:25:40.977651  302775 out.go:310] Setting ErrFile to fd 2...
	I0412 20:25:40.977656  302775 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0412 20:25:40.977775  302775 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/bin
	I0412 20:25:40.978024  302775 out.go:304] Setting JSON to false
	I0412 20:25:40.979319  302775 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":11294,"bootTime":1649783847,"procs":329,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1023-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0412 20:25:40.979397  302775 start.go:125] virtualization: kvm guest
	I0412 20:25:40.982252  302775 out.go:176] * [default-k8s-different-port-20220412201228-42006] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	I0412 20:25:40.984292  302775 out.go:176]   - MINIKUBE_LOCATION=13812
	I0412 20:25:40.982508  302775 notify.go:193] Checking for updates...
	I0412 20:25:40.986069  302775 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0412 20:25:40.987699  302775 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0412 20:25:40.989177  302775 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube
	I0412 20:25:40.990958  302775 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0412 20:25:40.991481  302775 config.go:178] Loaded profile config "default-k8s-different-port-20220412201228-42006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.5
	I0412 20:25:40.992603  302775 driver.go:346] Setting default libvirt URI to qemu:///system
	I0412 20:25:41.036514  302775 docker.go:137] docker version: linux-20.10.14
	I0412 20:25:41.036604  302775 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0412 20:25:41.138222  302775 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:75 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2022-04-12 20:25:41.069111625 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1023-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662799872 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0412 20:25:41.138342  302775 docker.go:254] overlay module found
	I0412 20:25:41.140887  302775 out.go:176] * Using the docker driver based on existing profile
	I0412 20:25:41.140919  302775 start.go:284] selected driver: docker
	I0412 20:25:41.140926  302775 start.go:801] validating driver "docker" against &{Name:default-k8s-different-port-20220412201228-42006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:default-k8s-different-port-20220412201228-42006 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.23.5 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0412 20:25:41.141041  302775 start.go:812] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	W0412 20:25:41.141086  302775 oci.go:120] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0412 20:25:41.141109  302775 out.go:241] ! Your cgroup does not allow setting memory.
	I0412 20:25:41.142724  302775 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0412 20:25:41.143315  302775 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0412 20:25:41.241191  302775 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:75 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2022-04-12 20:25:41.17623516 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1023-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662799872 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	W0412 20:25:41.241354  302775 oci.go:120] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0412 20:25:41.241406  302775 out.go:241] ! Your cgroup does not allow setting memory.
	I0412 20:25:41.243729  302775 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0412 20:25:41.243836  302775 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0412 20:25:41.243861  302775 cni.go:93] Creating CNI manager for ""
	I0412 20:25:41.243872  302775 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0412 20:25:41.243889  302775 start_flags.go:306] config:
	{Name:default-k8s-different-port-20220412201228-42006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:default-k8s-different-port-20220412201228-42006 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.23.5 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0412 20:25:41.246889  302775 out.go:176] * Starting control plane node default-k8s-different-port-20220412201228-42006 in cluster default-k8s-different-port-20220412201228-42006
	I0412 20:25:41.246928  302775 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0412 20:25:41.248537  302775 out.go:176] * Pulling base image ...
	I0412 20:25:41.248572  302775 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime containerd
	I0412 20:25:41.248612  302775 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-containerd-overlay2-amd64.tar.lz4
	I0412 20:25:41.248642  302775 cache.go:57] Caching tarball of preloaded images
	I0412 20:25:41.248665  302775 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon
	I0412 20:25:41.248918  302775 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0412 20:25:41.248940  302775 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.5 on containerd
	I0412 20:25:41.249111  302775 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220412201228-42006/config.json ...
	I0412 20:25:41.295232  302775 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon, skipping pull
	I0412 20:25:41.295265  302775 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 exists in daemon, skipping load
	I0412 20:25:41.295288  302775 cache.go:206] Successfully downloaded all kic artifacts
	I0412 20:25:41.295333  302775 start.go:352] acquiring machines lock for default-k8s-different-port-20220412201228-42006: {Name:mk673e2ef5ad74005354b6f8044ae48e370ea3c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0412 20:25:41.295441  302775 start.go:356] acquired machines lock for "default-k8s-different-port-20220412201228-42006" in 78.98µs
	I0412 20:25:41.295472  302775 start.go:94] Skipping create...Using existing machine configuration
	I0412 20:25:41.295481  302775 fix.go:55] fixHost starting: 
	I0412 20:25:41.295714  302775 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220412201228-42006 --format={{.State.Status}}
	I0412 20:25:41.330052  302775 fix.go:103] recreateIfNeeded on default-k8s-different-port-20220412201228-42006: state=Stopped err=<nil>
	W0412 20:25:41.330099  302775 fix.go:129] unexpected machine state, will restart: <nil>
	I0412 20:25:39.404942  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:25:41.405860  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:25:43.905123  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:25:41.529434  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:25:44.030080  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:25:41.332812  302775 out.go:176] * Restarting existing docker container for "default-k8s-different-port-20220412201228-42006" ...
	I0412 20:25:41.332900  302775 cli_runner.go:164] Run: docker start default-k8s-different-port-20220412201228-42006
	I0412 20:25:41.735198  302775 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220412201228-42006 --format={{.State.Status}}
	I0412 20:25:41.771480  302775 kic.go:416] container "default-k8s-different-port-20220412201228-42006" state is running.
	I0412 20:25:41.771899  302775 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220412201228-42006
	I0412 20:25:41.807070  302775 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220412201228-42006/config.json ...
	I0412 20:25:41.807321  302775 machine.go:88] provisioning docker machine ...
	I0412 20:25:41.807352  302775 ubuntu.go:169] provisioning hostname "default-k8s-different-port-20220412201228-42006"
	I0412 20:25:41.807404  302775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220412201228-42006
	I0412 20:25:41.843643  302775 main.go:134] libmachine: Using SSH client type: native
	I0412 20:25:41.843852  302775 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7d71c0] 0x7da220 <nil>  [] 0s} 127.0.0.1 49437 <nil> <nil>}
	I0412 20:25:41.843870  302775 main.go:134] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20220412201228-42006 && echo "default-k8s-different-port-20220412201228-42006" | sudo tee /etc/hostname
	I0412 20:25:41.844512  302775 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60986->127.0.0.1:49437: read: connection reset by peer
	I0412 20:25:44.977976  302775 main.go:134] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20220412201228-42006
	
	I0412 20:25:44.978060  302775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220412201228-42006
	I0412 20:25:45.012801  302775 main.go:134] libmachine: Using SSH client type: native
	I0412 20:25:45.012959  302775 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7d71c0] 0x7da220 <nil>  [] 0s} 127.0.0.1 49437 <nil> <nil>}
	I0412 20:25:45.012982  302775 main.go:134] libmachine: About to run SSH command:
	
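			# Keep /etc/hosts in sync with the hostname set above: rewrite an existing 127.0.1.1 entry if present, otherwise append one.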
			if ! grep -xq '.*\sdefault-k8s-different-port-20220412201228-42006' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20220412201228-42006/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20220412201228-42006' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0412 20:25:45.132428  302775 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0412 20:25:45.132458  302775 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube}
	I0412 20:25:45.132515  302775 ubuntu.go:177] setting up certificates
	I0412 20:25:45.132527  302775 provision.go:83] configureAuth start
	I0412 20:25:45.132583  302775 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220412201228-42006
	I0412 20:25:45.167292  302775 provision.go:138] copyHostCerts
	I0412 20:25:45.167378  302775 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem, removing ...
	I0412 20:25:45.167393  302775 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem
	I0412 20:25:45.167463  302775 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem (1082 bytes)
	I0412 20:25:45.167565  302775 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem, removing ...
	I0412 20:25:45.167579  302775 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem
	I0412 20:25:45.167616  302775 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem (1123 bytes)
	I0412 20:25:45.167686  302775 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem, removing ...
	I0412 20:25:45.167698  302775 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem
	I0412 20:25:45.167731  302775 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem (1675 bytes)
	I0412 20:25:45.167790  302775 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20220412201228-42006 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-different-port-20220412201228-42006]
	I0412 20:25:45.287902  302775 provision.go:172] copyRemoteCerts
	I0412 20:25:45.287991  302775 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0412 20:25:45.288040  302775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220412201228-42006
	I0412 20:25:45.322519  302775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220412201228-42006/id_rsa Username:docker}
	I0412 20:25:45.411995  302775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0412 20:25:45.430261  302775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem --> /etc/docker/server.pem (1310 bytes)
	I0412 20:25:45.448712  302775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0412 20:25:45.466551  302775 provision.go:86] duration metric: configureAuth took 334.00574ms
	I0412 20:25:45.466577  302775 ubuntu.go:193] setting minikube options for container-runtime
	I0412 20:25:45.466762  302775 config.go:178] Loaded profile config "default-k8s-different-port-20220412201228-42006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.5
	I0412 20:25:45.466775  302775 machine.go:91] provisioned docker machine in 3.659438406s
	I0412 20:25:45.466782  302775 start.go:306] post-start starting for "default-k8s-different-port-20220412201228-42006" (driver="docker")
	I0412 20:25:45.466788  302775 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0412 20:25:45.466829  302775 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0412 20:25:45.466867  302775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220412201228-42006
	I0412 20:25:45.501481  302775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220412201228-42006/id_rsa Username:docker}
	I0412 20:25:45.588112  302775 ssh_runner.go:195] Run: cat /etc/os-release
	I0412 20:25:45.591046  302775 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0412 20:25:45.591069  302775 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0412 20:25:45.591080  302775 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0412 20:25:45.591089  302775 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0412 20:25:45.591103  302775 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/addons for local assets ...
	I0412 20:25:45.591152  302775 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files for local assets ...
	I0412 20:25:45.591229  302775 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/420062.pem -> 420062.pem in /etc/ssl/certs
	I0412 20:25:45.591327  302775 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0412 20:25:45.598574  302775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/420062.pem --> /etc/ssl/certs/420062.pem (1708 bytes)
	I0412 20:25:45.617879  302775 start.go:309] post-start completed in 151.076407ms
	I0412 20:25:45.617968  302775 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0412 20:25:45.618023  302775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220412201228-42006
	I0412 20:25:45.652386  302775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220412201228-42006/id_rsa Username:docker}
	I0412 20:25:45.736884  302775 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0412 20:25:45.741043  302775 fix.go:57] fixHost completed within 4.445551228s
	I0412 20:25:45.741076  302775 start.go:81] releasing machines lock for "default-k8s-different-port-20220412201228-42006", held for 4.445612789s
	I0412 20:25:45.741159  302775 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220412201228-42006
	I0412 20:25:45.775496  302775 ssh_runner.go:195] Run: systemctl --version
	I0412 20:25:45.775542  302775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220412201228-42006
	I0412 20:25:45.775584  302775 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0412 20:25:45.775646  302775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220412201228-42006
	I0412 20:25:45.812306  302775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220412201228-42006/id_rsa Username:docker}
	I0412 20:25:45.812626  302775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220412201228-42006/id_rsa Username:docker}
	I0412 20:25:45.921246  302775 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0412 20:25:45.933022  302775 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0412 20:25:45.942974  302775 docker.go:183] disabling docker service ...
	I0412 20:25:45.943055  302775 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0412 20:25:45.953239  302775 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0412 20:25:45.962782  302775 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0412 20:25:46.404485  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:25:48.404784  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:25:46.529944  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:25:48.530319  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:25:46.046623  302775 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0412 20:25:46.129007  302775 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0412 20:25:46.138577  302775 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0412 20:25:46.152328  302775 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLm1vbml0b3IudjEuY2dyb3VwcyJdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNiIKICAgIHN0YXRzX2NvbGxlY3RfcGVyaW9kID0gMTAKICAgIGVuYWJsZV90bHNfc3RyZWFtaW5nID0gZmFsc2UKICAgIG1heF9jb250YWluZXJfbG9nX2xpbmVfc2l6ZSA9IDE2Mzg0CiAgICByZXN0cmljdF9vb21fc2NvcmVfYWRqID0gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZF0KICAgICAgZGlzY2FyZF91bnBhY2tlZF9sYXllcnMgPSB0cnVlCiAgICAgIHNuYXBzaG90dGVyID0gIm92ZXJsYXlmcyIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQuZGVmYXVsdF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnVudHJ1c3RlZF93b3JrbG9hZF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICIiCiAgICAgICAgcnVudGltZV9lbmdpbmUgPSAiIgogICAgICAgIHJ1bnRpbWVfcm9vdCA9ICIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzLnJ1bmNdCiAgICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICBTeXN0ZW1kQ2dyb3VwID0gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY25pXQogICAgICBiaW5fZGlyID0gIi9vcHQvY25pL2JpbiIKICAgICAgY29uZl9kaXIgPSAiL2V0Yy9jbmkvbmV0Lm1rIgogICAgICBjb25mX3RlbXBsYXRlID0gIiIKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5yZWdpc3RyeV0KICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5Lm1pcnJvcnNdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5Lm1pcnJvcnMuImRvY2tlci5pbyJdCiAgICAgICAgICBlbmRwb2ludCA9IFsiaHR0cHM6Ly9yZWdpc3RyeS0xLmRvY2tlci5pbyJdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuc2VydmljZS52MS5kaWZmLXNlcnZpY2UiXQogICAgZGVmYXVsdCA9IFsid2Fsa2luZyJdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ2MudjEuc2NoZWR1bGVyIl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
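The base64 payload in the command above is fully recoverable from the log: it decodes to the containerd config.toml that minikube stages, including `version = 2`, `root = "/var/lib/containerd"`, `sandbox_image = "k8s.gcr.io/pause:3.6"`, `SystemdCgroup = false` (matching the cgroupfs driver chosen later in the kubeadm config), and the CNI `conf_dir = "/etc/cni/net.mk"` that pairs with the kubelet's cni-conf-dir flag. A minimal decoding sketch; the constant holds only the first 48 characters of the payload:

package main

import (
	"encoding/base64"
	"fmt"
)

// Decode the config exactly as the remote `base64 -d` does. Only a prefix of
// the log's payload is reproduced here.
func main() {
	const prefix = "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWlu"
	raw, err := base64.StdEncoding.DecodeString(prefix)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s\n", raw) // version = 2, then: root = "/var/lib/contain
}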
	I0412 20:25:46.166473  302775 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0412 20:25:46.173272  302775 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0412 20:25:46.180113  302775 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0412 20:25:46.251894  302775 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0412 20:25:46.327719  302775 start.go:441] Will wait 60s for socket path /run/containerd/containerd.sock
	I0412 20:25:46.327799  302775 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0412 20:25:46.331793  302775 start.go:462] Will wait 60s for crictl version
	I0412 20:25:46.331863  302775 ssh_runner.go:195] Run: sudo crictl version
	I0412 20:25:46.357306  302775 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-04-12T20:25:46Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0412 20:25:50.405078  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:25:52.905509  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:25:51.029894  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:25:53.030953  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:25:55.529321  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:25:57.404189  302775 ssh_runner.go:195] Run: sudo crictl version
	I0412 20:25:57.428756  302775 start.go:471] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.5.10
	RuntimeApiVersion:  v1alpha2
	I0412 20:25:57.428821  302775 ssh_runner.go:195] Run: containerd --version
	I0412 20:25:57.451527  302775 ssh_runner.go:195] Run: containerd --version
	I0412 20:25:57.476141  302775 out.go:176] * Preparing Kubernetes v1.23.5 on containerd 1.5.10 ...
	I0412 20:25:57.476238  302775 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220412201228-42006 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0412 20:25:57.510584  302775 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0412 20:25:57.514080  302775 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
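The /etc/hosts rewrite above uses a common privilege pattern: the filtered file is assembled under /tmp, where the unprivileged shell can write, and only the final `cp` back into /etc runs under sudo, because the output redirection itself would not run with sudo's privileges. A rough Go equivalent of the staging step (the paths are for a local demonstration only):

package main

import (
	"fmt"
	"os"
	"strings"
)

// Drop any existing host.minikube.internal entry, append the fresh one, and
// stage the result under /tmp; a privileged copy then moves it back.
func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\thost.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, "192.168.49.1\thost.minikube.internal")
	tmp := fmt.Sprintf("/tmp/hosts.%d", os.Getpid())
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		panic(err)
	}
	fmt.Println("staged at", tmp, "- finish with: sudo cp", tmp, "/etc/hosts")
}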
	I0412 20:25:55.405528  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:25:57.904637  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:25:57.529524  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:25:59.529890  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:25:57.525999  302775 out.go:176]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0412 20:25:57.526084  302775 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime containerd
	I0412 20:25:57.526141  302775 ssh_runner.go:195] Run: sudo crictl images --output json
	I0412 20:25:57.550533  302775 containerd.go:607] all images are preloaded for containerd runtime.
	I0412 20:25:57.550557  302775 containerd.go:521] Images already preloaded, skipping extraction
	I0412 20:25:57.550612  302775 ssh_runner.go:195] Run: sudo crictl images --output json
	I0412 20:25:57.574550  302775 containerd.go:607] all images are preloaded for containerd runtime.
	I0412 20:25:57.574580  302775 cache_images.go:84] Images are preloaded, skipping loading
	I0412 20:25:57.574639  302775 ssh_runner.go:195] Run: sudo crictl info
	I0412 20:25:57.599639  302775 cni.go:93] Creating CNI manager for ""
	I0412 20:25:57.599668  302775 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0412 20:25:57.599690  302775 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0412 20:25:57.599711  302775 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8444 KubernetesVersion:v1.23.5 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20220412201228-42006 NodeName:default-k8s-different-port-20220412201228-42006 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0412 20:25:57.599848  302775 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "default-k8s-different-port-20220412201228-42006"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.5
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
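A note on the `"0%!"(MISSING)` tokens in the evictionHard block above: they are not part of the generated config. They are Go's fmt diagnostic for a `%` verb with no matching operand, produced when the logger re-formats the already-rendered template; the values actually written to kubeadm.yaml are plain `"0%"`, which disables disk-pressure eviction. The `printf %!s(MISSING)` strings earlier in the log are the same artifact. The behavior is easy to reproduce:

package main

import "fmt"

// Passing an already-rendered string through Printf again turns the literal
// `%"` into fmt's missing-operand diagnostic, exactly as seen in the log.
func main() {
	fmt.Printf("nodefs.available: \"0%\"\n") // prints: nodefs.available: "0%!"(MISSING)
}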
	
	I0412 20:25:57.599941  302775 kubeadm.go:936] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.5/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=default-k8s-different-port-20220412201228-42006 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.5 ClusterName:default-k8s-different-port-20220412201228-42006 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0412 20:25:57.600004  302775 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.5
	I0412 20:25:57.607520  302775 binaries.go:44] Found k8s binaries, skipping transfer
	I0412 20:25:57.607582  302775 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0412 20:25:57.614505  302775 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (592 bytes)
	I0412 20:25:57.627492  302775 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0412 20:25:57.640002  302775 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2076 bytes)
	I0412 20:25:57.652626  302775 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0412 20:25:57.655502  302775 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0412 20:25:57.664909  302775 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220412201228-42006 for IP: 192.168.49.2
	I0412 20:25:57.665006  302775 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.key
	I0412 20:25:57.665052  302775 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.key
	I0412 20:25:57.665122  302775 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220412201228-42006/client.key
	I0412 20:25:57.665173  302775 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220412201228-42006/apiserver.key.dd3b5fb2
	I0412 20:25:57.665208  302775 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220412201228-42006/proxy-client.key
	I0412 20:25:57.665293  302775 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/42006.pem (1338 bytes)
	W0412 20:25:57.665321  302775 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/42006_empty.pem, impossibly tiny 0 bytes
	I0412 20:25:57.665332  302775 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem (1679 bytes)
	I0412 20:25:57.665358  302775 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem (1082 bytes)
	I0412 20:25:57.665384  302775 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem (1123 bytes)
	I0412 20:25:57.665409  302775 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem (1675 bytes)
	I0412 20:25:57.665455  302775 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/420062.pem (1708 bytes)
	I0412 20:25:57.666053  302775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220412201228-42006/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0412 20:25:57.683954  302775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220412201228-42006/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0412 20:25:57.701541  302775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220412201228-42006/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0412 20:25:57.719461  302775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220412201228-42006/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0412 20:25:57.737734  302775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0412 20:25:57.756457  302775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0412 20:25:57.774968  302775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0412 20:25:57.793059  302775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0412 20:25:57.810982  302775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0412 20:25:57.829015  302775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/42006.pem --> /usr/share/ca-certificates/42006.pem (1338 bytes)
	I0412 20:25:57.847312  302775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/420062.pem --> /usr/share/ca-certificates/420062.pem (1708 bytes)
	I0412 20:25:57.864991  302775 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0412 20:25:57.878055  302775 ssh_runner.go:195] Run: openssl version
	I0412 20:25:57.883971  302775 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/420062.pem && ln -fs /usr/share/ca-certificates/420062.pem /etc/ssl/certs/420062.pem"
	I0412 20:25:57.892175  302775 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/420062.pem
	I0412 20:25:57.895736  302775 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Apr 12 19:26 /usr/share/ca-certificates/420062.pem
	I0412 20:25:57.895785  302775 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/420062.pem
	I0412 20:25:57.900802  302775 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/420062.pem /etc/ssl/certs/3ec20f2e.0"
	I0412 20:25:57.908397  302775 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0412 20:25:57.916262  302775 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0412 20:25:57.919469  302775 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Apr 12 19:21 /usr/share/ca-certificates/minikubeCA.pem
	I0412 20:25:57.919524  302775 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0412 20:25:57.924891  302775 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0412 20:25:57.932113  302775 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/42006.pem && ln -fs /usr/share/ca-certificates/42006.pem /etc/ssl/certs/42006.pem"
	I0412 20:25:57.940241  302775 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42006.pem
	I0412 20:25:57.943396  302775 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Apr 12 19:26 /usr/share/ca-certificates/42006.pem
	I0412 20:25:57.943447  302775 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42006.pem
	I0412 20:25:57.948339  302775 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/42006.pem /etc/ssl/certs/51391683.0"
	I0412 20:25:57.955118  302775 kubeadm.go:391] StartCluster: {Name:default-k8s-different-port-20220412201228-42006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:default-k8s-different-port-20220412201228-42006 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.23.5 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0412 20:25:57.955221  302775 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0412 20:25:57.955270  302775 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0412 20:25:57.980566  302775 cri.go:87] found id: "9833ae46466ccbb9055512e120cc659bee6ff8ac05bf843caeb9217333fd6b63"
	I0412 20:25:57.980602  302775 cri.go:87] found id: "e86db06fb9ce1685b312bc36622f28895b85dab6e39ee399901dce4efc6da848"
	I0412 20:25:57.980613  302775 cri.go:87] found id: "51def5f5fb57c8ab61a9c585b1fe038e725e93a3a81684c7e48cceffbcd0e646"
	I0412 20:25:57.980624  302775 cri.go:87] found id: "3c8657a1a5932876c532e5632e32b1b7bd034c015a4b5519a1ff53cf749d1ffd"
	I0412 20:25:57.980634  302775 cri.go:87] found id: "1032ec9dc604b2d805be253a0f7df89424fc5ef71ef86566ee57cd79cf66939c"
	I0412 20:25:57.980651  302775 cri.go:87] found id: "71af7fb31571e3cef12dcdba3ab49897e95bdbe6c1d9d6d5bbb1c36c97242cda"
	I0412 20:25:57.980666  302775 cri.go:87] found id: ""
	I0412 20:25:57.980719  302775 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0412 20:25:57.995137  302775 cri.go:114] JSON = null
	W0412 20:25:57.995186  302775 kubeadm.go:398] unpause failed: list paused: list returned 0 containers, but ps returned 6
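The restart path first looks for paused kube-system containers so it can unpause them: `crictl ps -a` reports six container IDs, but `runc list` under containerd's k8s.io root prints JSON `null`, so minikube logs the mismatch above and proceeds as if nothing were paused. A sketch of the same cross-check, with both commands taken from the log and error handling elided:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

// Compare what crictl reports for kube-system against what runc sees under
// containerd's runc root. A `null` runc listing alongside a non-empty crictl
// listing is exactly the condition the log warns about.
func main() {
	psOut, _ := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	ids := strings.Fields(string(psOut))

	runcOut, _ := exec.Command("sudo", "runc",
		"--root", "/run/containerd/runc/k8s.io", "list", "-f", "json").Output()
	var listed []map[string]any // stays nil when runc prints `null`
	_ = json.Unmarshal(runcOut, &listed)

	fmt.Printf("crictl sees %d containers, runc sees %d\n", len(ids), len(listed))
}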
	I0412 20:25:57.995232  302775 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0412 20:25:58.002528  302775 kubeadm.go:402] found existing configuration files, will attempt cluster restart
	I0412 20:25:58.002554  302775 kubeadm.go:601] restartCluster start
	I0412 20:25:58.002599  302775 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0412 20:25:58.009347  302775 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:25:58.010180  302775 kubeconfig.go:116] verify returned: extract IP: "default-k8s-different-port-20220412201228-42006" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0412 20:25:58.010679  302775 kubeconfig.go:127] "default-k8s-different-port-20220412201228-42006" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig - will repair!
	I0412 20:25:58.011431  302775 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig: {Name:mk47182dadcc139652898f38b199a7292a3b4031 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 20:25:58.013184  302775 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0412 20:25:58.020529  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:25:58.020588  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:25:58.029161  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:25:58.229565  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:25:58.229683  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:25:58.238841  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:25:58.430075  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:25:58.430153  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:25:58.439240  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:25:58.629511  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:25:58.629591  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:25:58.638727  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:25:58.829920  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:25:58.830002  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:25:58.839034  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:25:59.030207  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:25:59.030273  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:25:59.038870  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:25:59.230141  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:25:59.230228  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:25:59.239506  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:25:59.429823  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:25:59.429895  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:25:59.438940  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:25:59.630148  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:25:59.630223  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:25:59.639014  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:25:59.830279  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:25:59.830365  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:25:59.839400  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:26:00.029480  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:26:00.029578  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:26:00.039506  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:26:00.229819  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:26:00.229932  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:26:00.238666  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:26:00.429971  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:26:00.430041  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:26:00.439152  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:26:00.629391  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:26:00.629472  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:26:00.638771  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:26:00.830087  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:26:00.830179  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:26:00.839152  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:25:59.905306  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:01.905660  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:02.030088  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:04.030403  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:01.029653  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:26:01.029717  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:26:01.038688  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:26:01.038731  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:26:01.038777  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:26:01.047040  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:26:01.047087  302775 kubeadm.go:576] needs reconfigure: apiserver error: timed out waiting for the condition
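The block above shows how restartCluster decides a rebuild is needed: it probes for a live apiserver with the same pgrep on a roughly 200ms cadence (visible in the timestamps: .029, .229, .429, ...) and, once the deadline passes with no hit, records "needs reconfigure". The polling pattern, sketched with the stdlib; the interval and timeout are chosen for illustration:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// Poll for the kube-apiserver process every 200ms until a deadline, using
// the pgrep pattern taken verbatim from the log.
func apiserverPID(timeout time.Duration) (string, bool) {
	stop := time.Now().Add(timeout)
	for time.Now().Before(stop) {
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			return strings.TrimSpace(string(out)), true
		}
		time.Sleep(200 * time.Millisecond)
	}
	return "", false
}

func main() {
	if pid, ok := apiserverPID(3 * time.Second); ok {
		fmt.Println("apiserver pid:", pid)
	} else {
		fmt.Println("apiserver never appeared; cluster needs reconfigure")
	}
}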
	I0412 20:26:01.047098  302775 kubeadm.go:1067] stopping kube-system containers ...
	I0412 20:26:01.047119  302775 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0412 20:26:01.047173  302775 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0412 20:26:01.074252  302775 cri.go:87] found id: "9833ae46466ccbb9055512e120cc659bee6ff8ac05bf843caeb9217333fd6b63"
	I0412 20:26:01.074279  302775 cri.go:87] found id: "e86db06fb9ce1685b312bc36622f28895b85dab6e39ee399901dce4efc6da848"
	I0412 20:26:01.074289  302775 cri.go:87] found id: "51def5f5fb57c8ab61a9c585b1fe038e725e93a3a81684c7e48cceffbcd0e646"
	I0412 20:26:01.074295  302775 cri.go:87] found id: "3c8657a1a5932876c532e5632e32b1b7bd034c015a4b5519a1ff53cf749d1ffd"
	I0412 20:26:01.074302  302775 cri.go:87] found id: "1032ec9dc604b2d805be253a0f7df89424fc5ef71ef86566ee57cd79cf66939c"
	I0412 20:26:01.074309  302775 cri.go:87] found id: "71af7fb31571e3cef12dcdba3ab49897e95bdbe6c1d9d6d5bbb1c36c97242cda"
	I0412 20:26:01.074316  302775 cri.go:87] found id: ""
	I0412 20:26:01.074322  302775 cri.go:232] Stopping containers: [9833ae46466ccbb9055512e120cc659bee6ff8ac05bf843caeb9217333fd6b63 e86db06fb9ce1685b312bc36622f28895b85dab6e39ee399901dce4efc6da848 51def5f5fb57c8ab61a9c585b1fe038e725e93a3a81684c7e48cceffbcd0e646 3c8657a1a5932876c532e5632e32b1b7bd034c015a4b5519a1ff53cf749d1ffd 1032ec9dc604b2d805be253a0f7df89424fc5ef71ef86566ee57cd79cf66939c 71af7fb31571e3cef12dcdba3ab49897e95bdbe6c1d9d6d5bbb1c36c97242cda]
	I0412 20:26:01.074376  302775 ssh_runner.go:195] Run: which crictl
	I0412 20:26:01.077493  302775 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop 9833ae46466ccbb9055512e120cc659bee6ff8ac05bf843caeb9217333fd6b63 e86db06fb9ce1685b312bc36622f28895b85dab6e39ee399901dce4efc6da848 51def5f5fb57c8ab61a9c585b1fe038e725e93a3a81684c7e48cceffbcd0e646 3c8657a1a5932876c532e5632e32b1b7bd034c015a4b5519a1ff53cf749d1ffd 1032ec9dc604b2d805be253a0f7df89424fc5ef71ef86566ee57cd79cf66939c 71af7fb31571e3cef12dcdba3ab49897e95bdbe6c1d9d6d5bbb1c36c97242cda
	I0412 20:26:01.103072  302775 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0412 20:26:01.114425  302775 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0412 20:26:01.122172  302775 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Apr 12 20:12 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Apr 12 20:12 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2127 Apr 12 20:13 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5592 Apr 12 20:12 /etc/kubernetes/scheduler.conf
	
	I0412 20:26:01.122241  302775 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0412 20:26:01.129554  302775 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0412 20:26:01.136877  302775 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0412 20:26:01.143698  302775 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:26:01.143755  302775 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0412 20:26:01.150238  302775 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0412 20:26:01.157232  302775 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:26:01.157288  302775 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0412 20:26:01.164343  302775 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0412 20:26:01.171782  302775 kubeadm.go:678] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0412 20:26:01.171805  302775 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0412 20:26:01.218060  302775 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0412 20:26:01.745379  302775 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0412 20:26:01.885213  302775 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0412 20:26:01.938174  302775 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
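Rather than a full `kubeadm init`, the reconfigure path replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the staged /var/tmp/minikube/kubeadm.yaml, regenerating exactly the pieces removed above. A sketch of driving the same sequence; the binary and config paths come from the log, while the sudo/env PATH wrapping is omitted:

package main

import (
	"fmt"
	"os/exec"
)

// Replay the kubeadm init phases in the order the log runs them.
func main() {
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, p := range phases {
		args := append([]string{"init", "phase"}, p...)
		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
		if out, err := exec.Command("/var/lib/minikube/binaries/v1.23.5/kubeadm", args...).CombinedOutput(); err != nil {
			fmt.Printf("phase %v failed: %v\n%s", p, err, out)
			return
		}
	}
	fmt.Println("all init phases completed")
}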
	I0412 20:26:02.011809  302775 api_server.go:51] waiting for apiserver process to appear ...
	I0412 20:26:02.011879  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:26:02.521271  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:26:03.021279  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:26:03.521794  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:26:04.021460  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:26:04.521473  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:26:05.021310  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:26:05.521258  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:26:04.405325  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:06.905312  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:06.529561  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:08.530280  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:06.022069  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:26:06.522094  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:26:07.022120  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:26:07.521096  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:26:08.021120  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:26:08.091617  302775 api_server.go:71] duration metric: took 6.079806462s to wait for apiserver process to appear ...
	I0412 20:26:08.091701  302775 api_server.go:87] waiting for apiserver healthz status ...
	I0412 20:26:08.091726  302775 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0412 20:26:08.092170  302775 api_server.go:256] stopped: https://192.168.49.2:8444/healthz: Get "https://192.168.49.2:8444/healthz": dial tcp 192.168.49.2:8444: connect: connection refused
	I0412 20:26:08.592673  302775 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0412 20:26:11.086493  302775 api_server.go:266] https://192.168.49.2:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0412 20:26:11.086525  302775 api_server.go:102] status: https://192.168.49.2:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0412 20:26:11.092362  302775 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0412 20:26:11.097010  302775 api_server.go:266] https://192.168.49.2:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0412 20:26:11.097085  302775 api_server.go:102] status: https://192.168.49.2:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0412 20:26:11.592382  302775 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0412 20:26:11.597320  302775 api_server.go:266] https://192.168.49.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0412 20:26:11.597353  302775 api_server.go:102] status: https://192.168.49.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0412 20:26:12.092945  302775 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0412 20:26:12.097452  302775 api_server.go:266] https://192.168.49.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0412 20:26:12.097482  302775 api_server.go:102] status: https://192.168.49.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0412 20:26:12.593112  302775 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0412 20:26:12.598178  302775 api_server.go:266] https://192.168.49.2:8444/healthz returned 200:
	ok
	I0412 20:26:12.604429  302775 api_server.go:140] control plane version: v1.23.5
	I0412 20:26:12.604455  302775 api_server.go:130] duration metric: took 4.512735667s to wait for apiserver health ...
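The healthz wait above moves through a predictable progression: connection refused while the apiserver socket is still down, 403 once it is listening but anonymous access to /healthz is not yet authorized, 500 while the rbac/bootstrap-roles post-start hook is still failing, and finally 200 ("ok"). A bare-bones version of the probe loop; TLS verification is skipped here purely for brevity, where a real client would pin the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// Poll the apiserver's /healthz endpoint every 500ms, printing each status
// until it returns 200, mirroring the checks in the log.
func main() {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for i := 0; i < 10; i++ {
		resp, err := client.Get("https://192.168.49.2:8444/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
			if resp.StatusCode == http.StatusOK {
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
}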
	I0412 20:26:12.604466  302775 cni.go:93] Creating CNI manager for ""
	I0412 20:26:12.604475  302775 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0412 20:26:09.405613  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:11.905154  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:11.029929  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:13.030209  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:15.530013  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:12.607164  302775 out.go:176] * Configuring CNI (Container Networking Interface) ...
	I0412 20:26:12.607235  302775 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0412 20:26:12.610895  302775 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.5/kubectl ...
	I0412 20:26:12.610917  302775 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0412 20:26:12.624805  302775 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0412 20:26:13.514228  302775 system_pods.go:43] waiting for kube-system pods to appear ...
	I0412 20:26:13.521326  302775 system_pods.go:59] 9 kube-system pods found
	I0412 20:26:13.521387  302775 system_pods.go:61] "coredns-64897985d-c2gzm" [17d60869-0f98-4975-877a-d2ac69c4c6c2] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0412 20:26:13.521400  302775 system_pods.go:61] "etcd-default-k8s-different-port-20220412201228-42006" [90ac8791-2f40-445e-a751-748814d43a72] Running
	I0412 20:26:13.521415  302775 system_pods.go:61] "kindnet-852v4" [d4596d79-4aba-4c96-9fd5-c2c2b2010810] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0412 20:26:13.521437  302775 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20220412201228-42006" [a3eb3b43-f13c-4205-9caf-0b3914050d7c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0412 20:26:13.521450  302775 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20220412201228-42006" [fca7914c-0a48-40de-af60-44c695d023c7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0412 20:26:13.521456  302775 system_pods.go:61] "kube-proxy-nfsgp" [fb26fa90-e38d-4c50-bbdc-aa46859bef70] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0412 20:26:13.521466  302775 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20220412201228-42006" [9fbd69c6-cf7b-4801-b028-f7729f80bf64] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0412 20:26:13.521475  302775 system_pods.go:61] "metrics-server-b955d9d8-8z9c9" [e954cf67-0a7d-42ed-b754-921b79512531] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0412 20:26:13.521484  302775 system_pods.go:61] "storage-provisioner" [c1d494a3-740b-43f4-bd16-12e781074fdd] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0412 20:26:13.521493  302775 system_pods.go:74] duration metric: took 7.243145ms to wait for pod list to return data ...
	I0412 20:26:13.521504  302775 node_conditions.go:102] verifying NodePressure condition ...
	I0412 20:26:13.524664  302775 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0412 20:26:13.524723  302775 node_conditions.go:123] node cpu capacity is 8
	I0412 20:26:13.524744  302775 node_conditions.go:105] duration metric: took 3.23136ms to run NodePressure ...
	I0412 20:26:13.524771  302775 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0412 20:26:13.661578  302775 kubeadm.go:737] waiting for restarted kubelet to initialise ...
	I0412 20:26:13.665722  302775 kubeadm.go:752] kubelet initialised
	I0412 20:26:13.665746  302775 kubeadm.go:753] duration metric: took 4.136738ms waiting for restarted kubelet to initialise ...
	I0412 20:26:13.665755  302775 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0412 20:26:13.670837  302775 pod_ready.go:78] waiting up to 4m0s for pod "coredns-64897985d-c2gzm" in "kube-system" namespace to be "Ready" ...
	I0412 20:26:15.676828  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
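The 4m0s wait loop polls each system-critical pod until its Ready condition turns True; coredns is still Pending because the lone node carries the node.kubernetes.io/not-ready taint, so the pod has no Ready condition at all yet. A minimal version of that readiness test using the standard k8s.io/api types; it mirrors the check pod_ready.go performs, not its exact code:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// A pod counts as "Ready" only when its PodReady condition is True. A
// Pending pod like the coredns one above has no such condition yet.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	pending := &corev1.Pod{}        // no conditions set, like the Pending coredns pod
	fmt.Println(isPodReady(pending)) // prints false
}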
	I0412 20:26:14.405001  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:16.405140  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:18.405282  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:18.029626  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:20.029796  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:18.177431  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:20.676699  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:20.904768  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:22.905306  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:22.530289  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:25.030441  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:22.676917  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:25.177312  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:25.405505  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:27.405547  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:27.529706  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:29.529954  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:27.677396  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:30.176836  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:29.904767  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:31.905389  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:32.029879  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:34.030539  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:32.177928  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:34.676583  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:34.405637  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:36.904807  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:36.030819  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:38.529411  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:40.529737  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:36.676861  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:38.676927  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:39.404491  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:41.404659  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:43.905243  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:43.029801  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:45.030177  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:41.177333  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:43.177431  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:45.177567  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:46.404939  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:48.405023  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:47.529990  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:50.029848  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:47.676992  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:50.177314  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:50.904925  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:52.905456  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:52.529958  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:54.530211  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:52.677354  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:55.177581  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:55.404968  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:57.904806  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:57.029172  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:59.029355  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:57.177797  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:59.676784  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:59.905303  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:27:02.404803  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:27:01.030119  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:27:03.529481  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:27:02.176739  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:04.677083  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:04.904522  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:27:06.905502  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:27:06.030007  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:27:08.529404  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:27:07.177282  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:09.677448  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:09.405228  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:27:11.905282  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:27:11.029791  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:27:13.030282  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:27:15.529429  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:27:12.176384  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:14.177069  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:14.404646  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:27:16.405558  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:27:18.905261  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:27:17.530006  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:27:20.030016  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:27:16.177280  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:18.677413  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:21.405385  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:27:22.907629  289404 node_ready.go:38] duration metric: took 4m0.012711851s waiting for node "old-k8s-version-20220412200421-42006" to be "Ready" ...
	I0412 20:27:22.910753  289404 out.go:176] 
	W0412 20:27:22.910934  289404 out.go:241] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0412 20:27:22.910950  289404 out.go:241] * 
	W0412 20:27:22.911829  289404 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	
	* 
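
The GUEST_START failure above is minikube giving up after its 6m0s node-readiness wait; the node never left Ready=False because the CNI plugin never initialised (see the node conditions further down). One way to confirm the stuck condition from a shell, assuming kubectl is pointed at the failing cluster's kubeconfig, would be:

    $ kubectl get nodes -o wide
    $ kubectl get node old-k8s-version-20220412200421-42006 \
        -o jsonpath='{.status.conditions[?(@.type=="Ready")].message}'

This should echo the same "cni plugin not initialized" message that kubelet reports in the logs below.
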
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	72d9664fbba36       6de166512aa22       About a minute ago   Running             kindnet-cni               1                   cfff760ba8d17
	d71fcd57fee8b       6de166512aa22       4 minutes ago        Exited              kindnet-cni               0                   cfff760ba8d17
	35cfeab0e4e1d       c21b0c7400f98       4 minutes ago        Running             kube-proxy                0                   44b87fce4f1d0
	899651f5f598c       06a629a7e51cd       4 minutes ago        Running             kube-controller-manager   0                   82e6dfa275719
	43048450227de       b305571ca60a5       4 minutes ago        Running             kube-apiserver            0                   e8c2453c42536
	c74bd61d489ea       b2756210eeabf       4 minutes ago        Running             etcd                      0                   01918d7054f01
	eace48121b7e9       301ddc62b80b1       4 minutes ago        Running             kube-scheduler            0                   8f273a6589233
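
The container table shows the first kindnet-cni attempt exited and was restarted about a minute before this dump, while every control-plane container kept running. To see why the first kindnet container died, one option (assuming crictl inside the kic node is configured against the containerd socket) is to read its logs by ID prefix:

    $ docker exec old-k8s-version-20220412200421-42006 crictl logs d71fcd57fee8b
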
	
	* 
	* ==> containerd <==
	* -- Logs begin at Tue 2022-04-12 20:17:30 UTC, end at Tue 2022-04-12 20:27:24 UTC. --
	Apr 12 20:22:58 old-k8s-version-20220412200421-42006 containerd[345]: time="2022-04-12T20:22:58.679941321Z" level=info msg="StartContainer for \"43048450227de67e9e1809cef2a38841367c12dd11d318da59981f0b718e3d27\" returns successfully"
	Apr 12 20:22:58 old-k8s-version-20220412200421-42006 containerd[345]: time="2022-04-12T20:22:58.684107923Z" level=info msg="StartContainer for \"899651f5f598cfc5b9f581e04ee299c2209d93af0488aba1e94a3bc26897c31c\" returns successfully"
	Apr 12 20:23:21 old-k8s-version-20220412200421-42006 containerd[345]: time="2022-04-12T20:23:21.930220115Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
	Apr 12 20:23:22 old-k8s-version-20220412200421-42006 containerd[345]: time="2022-04-12T20:23:22.292777404Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kindnet-r6mfw,Uid:eea12494-5d62-4fc1-a11b-fc3c48b53e19,Namespace:kube-system,Attempt:0,}"
	Apr 12 20:23:22 old-k8s-version-20220412200421-42006 containerd[345]: time="2022-04-12T20:23:22.299721700Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kube-proxy-ch8rr,Uid:e815bda4-d086-4f0a-9275-eac02937a25b,Namespace:kube-system,Attempt:0,}"
	Apr 12 20:23:22 old-k8s-version-20220412200421-42006 containerd[345]: time="2022-04-12T20:23:22.311615938Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/cfff760ba8d171278faf2170efc42a44df63f593fc4c709edf1a213ee0634308 pid=3833
	Apr 12 20:23:22 old-k8s-version-20220412200421-42006 containerd[345]: time="2022-04-12T20:23:22.320927150Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/44b87fce4f1d077218768eb777762768834a6f6e436c553b83bca806f5569b01 pid=3857
	Apr 12 20:23:22 old-k8s-version-20220412200421-42006 containerd[345]: time="2022-04-12T20:23:22.424113480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ch8rr,Uid:e815bda4-d086-4f0a-9275-eac02937a25b,Namespace:kube-system,Attempt:0,} returns sandbox id \"44b87fce4f1d077218768eb777762768834a6f6e436c553b83bca806f5569b01\""
	Apr 12 20:23:22 old-k8s-version-20220412200421-42006 containerd[345]: time="2022-04-12T20:23:22.427283518Z" level=info msg="CreateContainer within sandbox \"44b87fce4f1d077218768eb777762768834a6f6e436c553b83bca806f5569b01\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
	Apr 12 20:23:22 old-k8s-version-20220412200421-42006 containerd[345]: time="2022-04-12T20:23:22.446841848Z" level=info msg="CreateContainer within sandbox \"44b87fce4f1d077218768eb777762768834a6f6e436c553b83bca806f5569b01\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"35cfeab0e4e1d9964e124ff25be39891b8083f742e581e3929c3b8722b2f97fa\""
	Apr 12 20:23:22 old-k8s-version-20220412200421-42006 containerd[345]: time="2022-04-12T20:23:22.447406478Z" level=info msg="StartContainer for \"35cfeab0e4e1d9964e124ff25be39891b8083f742e581e3929c3b8722b2f97fa\""
	Apr 12 20:23:22 old-k8s-version-20220412200421-42006 containerd[345]: time="2022-04-12T20:23:22.543851226Z" level=info msg="StartContainer for \"35cfeab0e4e1d9964e124ff25be39891b8083f742e581e3929c3b8722b2f97fa\" returns successfully"
	Apr 12 20:23:22 old-k8s-version-20220412200421-42006 containerd[345]: time="2022-04-12T20:23:22.595027158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kindnet-r6mfw,Uid:eea12494-5d62-4fc1-a11b-fc3c48b53e19,Namespace:kube-system,Attempt:0,} returns sandbox id \"cfff760ba8d171278faf2170efc42a44df63f593fc4c709edf1a213ee0634308\""
	Apr 12 20:23:22 old-k8s-version-20220412200421-42006 containerd[345]: time="2022-04-12T20:23:22.598630371Z" level=info msg="CreateContainer within sandbox \"cfff760ba8d171278faf2170efc42a44df63f593fc4c709edf1a213ee0634308\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:0,}"
	Apr 12 20:23:22 old-k8s-version-20220412200421-42006 containerd[345]: time="2022-04-12T20:23:22.611444889Z" level=info msg="CreateContainer within sandbox \"cfff760ba8d171278faf2170efc42a44df63f593fc4c709edf1a213ee0634308\" for &ContainerMetadata{Name:kindnet-cni,Attempt:0,} returns container id \"d71fcd57fee8b6777852fae0b1b5597d1815543e0967b4f93d5602bab62ff3c0\""
	Apr 12 20:23:22 old-k8s-version-20220412200421-42006 containerd[345]: time="2022-04-12T20:23:22.612015085Z" level=info msg="StartContainer for \"d71fcd57fee8b6777852fae0b1b5597d1815543e0967b4f93d5602bab62ff3c0\""
	Apr 12 20:23:22 old-k8s-version-20220412200421-42006 containerd[345]: time="2022-04-12T20:23:22.700352386Z" level=info msg="StartContainer for \"d71fcd57fee8b6777852fae0b1b5597d1815543e0967b4f93d5602bab62ff3c0\" returns successfully"
	Apr 12 20:26:03 old-k8s-version-20220412200421-42006 containerd[345]: time="2022-04-12T20:26:03.119071289Z" level=info msg="shim disconnected" id=d71fcd57fee8b6777852fae0b1b5597d1815543e0967b4f93d5602bab62ff3c0
	Apr 12 20:26:03 old-k8s-version-20220412200421-42006 containerd[345]: time="2022-04-12T20:26:03.119153071Z" level=warning msg="cleaning up after shim disconnected" id=d71fcd57fee8b6777852fae0b1b5597d1815543e0967b4f93d5602bab62ff3c0 namespace=k8s.io
	Apr 12 20:26:03 old-k8s-version-20220412200421-42006 containerd[345]: time="2022-04-12T20:26:03.119168897Z" level=info msg="cleaning up dead shim"
	Apr 12 20:26:03 old-k8s-version-20220412200421-42006 containerd[345]: time="2022-04-12T20:26:03.130231448Z" level=warning msg="cleanup warnings time=\"2022-04-12T20:26:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4666\n"
	Apr 12 20:26:03 old-k8s-version-20220412200421-42006 containerd[345]: time="2022-04-12T20:26:03.139289978Z" level=info msg="CreateContainer within sandbox \"cfff760ba8d171278faf2170efc42a44df63f593fc4c709edf1a213ee0634308\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:1,}"
	Apr 12 20:26:03 old-k8s-version-20220412200421-42006 containerd[345]: time="2022-04-12T20:26:03.153742475Z" level=info msg="CreateContainer within sandbox \"cfff760ba8d171278faf2170efc42a44df63f593fc4c709edf1a213ee0634308\" for &ContainerMetadata{Name:kindnet-cni,Attempt:1,} returns container id \"72d9664fbba36142efc1f361b4633b51fbdca60ad76718b907afdf20587df1a5\""
	Apr 12 20:26:03 old-k8s-version-20220412200421-42006 containerd[345]: time="2022-04-12T20:26:03.154282281Z" level=info msg="StartContainer for \"72d9664fbba36142efc1f361b4633b51fbdca60ad76718b907afdf20587df1a5\""
	Apr 12 20:26:03 old-k8s-version-20220412200421-42006 containerd[345]: time="2022-04-12T20:26:03.298105067Z" level=info msg="StartContainer for \"72d9664fbba36142efc1f361b4633b51fbdca60ad76718b907afdf20587df1a5\" returns successfully"
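
The 20:23:21 line is containerd reporting that no CNI config has been dropped into its conf directory yet, which matches the KubeletNotReady condition in the node description below. A quick check of whether kindnet ever wrote its config, assuming the default containerd CNI conf dir inside the kic container, would be:

    $ docker exec old-k8s-version-20220412200421-42006 ls -la /etc/cni/net.d
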
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-20220412200421-42006
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-20220412200421-42006
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=dcd548d63d1c0dcbdc0ffc0bd37d4379117c142f
	                    minikube.k8s.io/name=old-k8s-version-20220412200421-42006
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_04_12T20_23_07_0700
	                    minikube.k8s.io/version=v1.25.2
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 12 Apr 2022 20:23:02 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 12 Apr 2022 20:27:02 +0000   Tue, 12 Apr 2022 20:22:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 12 Apr 2022 20:27:02 +0000   Tue, 12 Apr 2022 20:22:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 12 Apr 2022 20:27:02 +0000   Tue, 12 Apr 2022 20:22:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Tue, 12 Apr 2022 20:27:02 +0000   Tue, 12 Apr 2022 20:22:59 +0000   KubeletNotReady              runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    old-k8s-version-20220412200421-42006
	Capacity:
	 cpu:                8
	 ephemeral-storage:  304695084Ki
	 hugepages-1Gi:      0
	 hugepages-2Mi:      0
	 memory:             32873828Ki
	 pods:               110
	Allocatable:
	 cpu:                8
	 ephemeral-storage:  304695084Ki
	 hugepages-1Gi:      0
	 hugepages-2Mi:      0
	 memory:             32873828Ki
	 pods:               110
	System Info:
	 Machine ID:                 140a143b31184b58be947b52a01fff83
	 System UUID:                0b57e9d3-0bbc-4976-a928-dc02ca892e39
	 Boot ID:                    16b2caa1-c1b9-4ccc-85b8-d4dc3f51a5e1
	 Kernel Version:             5.13.0-1023-gcp
	 OS Image:                   Ubuntu 20.04.4 LTS
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  containerd://1.5.10
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (6 in total)
	  Namespace                  Name                                                            CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                                            ------------  ----------  ---------------  -------------  ---
	  kube-system                etcd-old-k8s-version-20220412200421-42006                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m6s
	  kube-system                kindnet-r6mfw                                                   100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m3s
	  kube-system                kube-apiserver-old-k8s-version-20220412200421-42006             250m (3%)     0 (0%)      0 (0%)           0 (0%)         3m24s
	  kube-system                kube-controller-manager-old-k8s-version-20220412200421-42006    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m56s
	  kube-system                kube-proxy-ch8rr                                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m3s
	  kube-system                kube-scheduler-old-k8s-version-20220412200421-42006             100m (1%)     0 (0%)      0 (0%)           0 (0%)         3m19s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                650m (8%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From                                              Message
	  ----    ------                   ----                   ----                                              -------
	  Normal  NodeHasSufficientMemory  4m27s (x8 over 4m27s)  kubelet, old-k8s-version-20220412200421-42006     Node old-k8s-version-20220412200421-42006 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m27s (x8 over 4m27s)  kubelet, old-k8s-version-20220412200421-42006     Node old-k8s-version-20220412200421-42006 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m27s (x7 over 4m27s)  kubelet, old-k8s-version-20220412200421-42006     Node old-k8s-version-20220412200421-42006 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m2s                   kube-proxy, old-k8s-version-20220412200421-42006  Starting kube-proxy.
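
The node.kubernetes.io/not-ready:NoSchedule taint above is exactly what the Pending pods report as unschedulable, while the DaemonSet pods (kindnet, kube-proxy) run anyway because they carry matching tolerations. Assuming kubectl access, the difference is visible by listing tolerations side by side:

    $ kubectl -n kube-system get pods \
        -o custom-columns=NAME:.metadata.name,TOLERATIONS:.spec.tolerations
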
	
	* 
	* ==> dmesg <==
	* [  +0.000009] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +0.125166] IPv4: martian source 10.244.0.7 from 10.244.0.7, on dev vethe3e22a2f
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff d2 83 e6 b4 2e c9 08 06
	[  +0.519855] IPv4: martian source 10.244.0.8 from 10.244.0.8, on dev vethde433a44
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fe f7 53 8a eb 26 08 06
	[  +0.208112] IPv4: martian source 10.244.0.9 from 10.244.0.9, on dev veth05fda112
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 06 c9 f0 64 c1 d9 08 06
	[Apr12 20:12] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +1.026706] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +1.023926] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +2.947865] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +1.023840] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +1.019933] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +2.959880] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +1.007861] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +1.023916] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	
	* 
	* ==> etcd [c74bd61d489ea294c4524038815c535a5b71892b9f14baf6fda9b9aa6beb3722] <==
	* 2022-04-12 20:22:58.618896 I | etcdserver: initial cluster = old-k8s-version-20220412200421-42006=https://192.168.67.2:2380
	2022-04-12 20:22:58.622283 I | etcdserver: starting member 8688e899f7831fc7 in cluster 9d8fdeb88b6def78
	2022-04-12 20:22:58.622312 I | raft: 8688e899f7831fc7 became follower at term 0
	2022-04-12 20:22:58.622318 I | raft: newRaft 8688e899f7831fc7 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	2022-04-12 20:22:58.622321 I | raft: 8688e899f7831fc7 became follower at term 1
	2022-04-12 20:22:58.681535 W | auth: simple token is not cryptographically signed
	2022-04-12 20:22:58.685815 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2022-04-12 20:22:58.686083 I | etcdserver: 8688e899f7831fc7 as single-node; fast-forwarding 9 ticks (election ticks 10)
	2022-04-12 20:22:58.686636 I | etcdserver/membership: added member 8688e899f7831fc7 [https://192.168.67.2:2380] to cluster 9d8fdeb88b6def78
	2022-04-12 20:22:58.688641 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2022-04-12 20:22:58.688786 I | embed: listening for metrics on http://192.168.67.2:2381
	2022-04-12 20:22:58.689067 I | embed: listening for metrics on http://127.0.0.1:2381
	2022-04-12 20:22:59.322750 I | raft: 8688e899f7831fc7 is starting a new election at term 1
	2022-04-12 20:22:59.322790 I | raft: 8688e899f7831fc7 became candidate at term 2
	2022-04-12 20:22:59.322807 I | raft: 8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 2
	2022-04-12 20:22:59.322821 I | raft: 8688e899f7831fc7 became leader at term 2
	2022-04-12 20:22:59.322828 I | raft: raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 2
	2022-04-12 20:22:59.323129 I | etcdserver: setting up the initial cluster version to 3.3
	2022-04-12 20:22:59.324060 N | etcdserver/membership: set the initial cluster version to 3.3
	2022-04-12 20:22:59.324130 I | etcdserver: published {Name:old-k8s-version-20220412200421-42006 ClientURLs:[https://192.168.67.2:2379]} to cluster 9d8fdeb88b6def78
	2022-04-12 20:22:59.324144 I | etcdserver/api: enabled capabilities for version 3.3
	2022-04-12 20:22:59.324159 I | embed: ready to serve client requests
	2022-04-12 20:22:59.324240 I | embed: ready to serve client requests
	2022-04-12 20:22:59.327070 I | embed: serving client requests on 127.0.0.1:2379
	2022-04-12 20:22:59.329081 I | embed: serving client requests on 192.168.67.2:2379
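
etcd itself came up cleanly: a single-member cluster winning its own election at term 2 and serving clients on both loopback and 192.168.67.2. If etcd health were in doubt, it could be probed with the cert paths shown above, assuming etcdctl v3 is available inside the node:

    $ ETCDCTL_API=3 etcdctl --endpoints=https://192.168.67.2:2379 \
        --cacert=/var/lib/minikube/certs/etcd/ca.crt \
        --cert=/var/lib/minikube/certs/etcd/server.crt \
        --key=/var/lib/minikube/certs/etcd/server.key \
        endpoint health
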
	
	* 
	* ==> kernel <==
	*  20:27:24 up  3:09,  0 users,  load average: 0.94, 0.89, 1.15
	Linux old-k8s-version-20220412200421-42006 5.13.0-1023-gcp #28~20.04.1-Ubuntu SMP Wed Mar 30 03:51:07 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [43048450227de67e9e1809cef2a38841367c12dd11d318da59981f0b718e3d27] <==
	* I0412 20:23:05.071632       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0412 20:23:05.351474       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0412 20:23:05.703953       1 lease.go:222] Resetting endpoints for master service "kubernetes" to [192.168.67.2]
	I0412 20:23:05.704766       1 controller.go:606] quota admission added evaluator for: endpoints
	I0412 20:23:06.607358       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I0412 20:23:06.848596       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I0412 20:23:07.211289       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I0412 20:23:21.929276       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	I0412 20:23:21.945589       1 controller.go:606] quota admission added evaluator for: events.events.k8s.io
	I0412 20:23:21.963735       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	I0412 20:23:25.697587       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0412 20:23:25.697639       1 handler_proxy.go:99] no RequestInfo found in the context
	E0412 20:23:25.697692       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0412 20:23:25.697706       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0412 20:24:25.697881       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0412 20:24:25.697951       1 handler_proxy.go:99] no RequestInfo found in the context
	E0412 20:24:25.697986       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0412 20:24:25.698004       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0412 20:26:25.698222       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0412 20:26:25.698285       1 handler_proxy.go:99] no RequestInfo found in the context
	E0412 20:26:25.698355       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0412 20:26:25.698375       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
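
The repeating 503 for v1beta1.metrics.k8s.io is the aggregated metrics-server API being registered but unreachable, which is expected here since no node is schedulable for the metrics-server pod. Assuming kubectl access, the APIService condition makes this explicit:

    $ kubectl get apiservice v1beta1.metrics.k8s.io \
        -o jsonpath='{.status.conditions[?(@.type=="Available")].message}'
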
	
	* 
	* ==> kube-controller-manager [899651f5f598cfc5b9f581e04ee299c2209d93af0488aba1e94a3bc26897c31c] <==
	* I0412 20:23:24.196714       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-6b84985989", UID:"f1a80a28-f18b-42c5-a897-4cacd2997672", APIVersion:"apps/v1", ResourceVersion:"432", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-6b84985989-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0412 20:23:24.198465       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-6fb5469cf5" failed with pods "kubernetes-dashboard-6fb5469cf5-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0412 20:23:24.198499       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-6fb5469cf5", UID:"97388b8f-abd5-430b-a4fc-9d5b6e125776", APIVersion:"apps/v1", ResourceVersion:"437", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-6fb5469cf5-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0412 20:23:24.203773       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-6b84985989" failed with pods "dashboard-metrics-scraper-6b84985989-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0412 20:23:24.203877       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-6b84985989", UID:"f1a80a28-f18b-42c5-a897-4cacd2997672", APIVersion:"apps/v1", ResourceVersion:"432", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-6b84985989-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0412 20:23:24.204497       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-6fb5469cf5" failed with pods "kubernetes-dashboard-6fb5469cf5-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0412 20:23:24.204504       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-6fb5469cf5", UID:"97388b8f-abd5-430b-a4fc-9d5b6e125776", APIVersion:"apps/v1", ResourceVersion:"437", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-6fb5469cf5-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0412 20:23:24.888293       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"metrics-server-6f89b5864b", UID:"2d14966f-d2ce-4364-a2f1-dcb294ac576b", APIVersion:"apps/v1", ResourceVersion:"395", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: metrics-server-6f89b5864b-g7z8d
	I0412 20:23:25.215088       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-6b84985989", UID:"f1a80a28-f18b-42c5-a897-4cacd2997672", APIVersion:"apps/v1", ResourceVersion:"432", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: dashboard-metrics-scraper-6b84985989-g99k4
	I0412 20:23:25.217697       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-6fb5469cf5", UID:"97388b8f-abd5-430b-a4fc-9d5b6e125776", APIVersion:"apps/v1", ResourceVersion:"437", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kubernetes-dashboard-6fb5469cf5-k6tnl
	E0412 20:23:52.464113       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0412 20:23:54.316939       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0412 20:24:22.715785       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0412 20:24:26.318603       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0412 20:24:52.967386       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0412 20:24:58.320259       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0412 20:25:23.218924       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0412 20:25:30.321587       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0412 20:25:53.470651       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0412 20:26:02.323361       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0412 20:26:23.722263       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0412 20:26:34.325190       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0412 20:26:53.973867       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0412 20:27:06.326963       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0412 20:27:24.225668       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	
	* 
	* ==> kube-proxy [35cfeab0e4e1d9964e124ff25be39891b8083f742e581e3929c3b8722b2f97fa] <==
	* W0412 20:23:22.585721       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I0412 20:23:22.594854       1 node.go:135] Successfully retrieved node IP: 192.168.67.2
	I0412 20:23:22.594901       1 server_others.go:149] Using iptables Proxier.
	I0412 20:23:22.595265       1 server.go:529] Version: v1.16.0
	I0412 20:23:22.595938       1 config.go:313] Starting service config controller
	I0412 20:23:22.595977       1 shared_informer.go:197] Waiting for caches to sync for service config
	I0412 20:23:22.596105       1 config.go:131] Starting endpoints config controller
	I0412 20:23:22.596136       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I0412 20:23:22.696164       1 shared_informer.go:204] Caches are synced for service config 
	I0412 20:23:22.696339       1 shared_informer.go:204] Caches are synced for endpoints config 
	
	* 
	* ==> kube-scheduler [eace48121b7e9cb97a533cf95c603ce5868bf94d79d9ae87d2256ed29a48a90e] <==
	* E0412 20:23:02.581496       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0412 20:23:02.581570       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0412 20:23:02.581962       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0412 20:23:02.582000       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0412 20:23:02.583995       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0412 20:23:02.584131       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0412 20:23:02.584876       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0412 20:23:02.584965       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0412 20:23:02.585029       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0412 20:23:02.585381       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0412 20:23:02.587877       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0412 20:23:03.583153       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0412 20:23:03.584523       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0412 20:23:03.585485       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0412 20:23:03.586519       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0412 20:23:03.587580       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0412 20:23:03.589455       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0412 20:23:03.590444       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0412 20:23:03.591706       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0412 20:23:03.592824       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0412 20:23:03.594549       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0412 20:23:03.595674       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0412 20:23:21.989917       1 factory.go:585] pod is already present in the activeQ
	E0412 20:23:24.891555       1 factory.go:585] pod is already present in the activeQ
	E0412 20:23:25.227638       1 factory.go:585] pod is already present in the activeQ
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2022-04-12 20:17:30 UTC, end at Tue 2022-04-12 20:27:24 UTC. --
	Apr 12 20:25:23 old-k8s-version-20220412200421-42006 kubelet[2956]: E0412 20:25:23.006858    2956 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Apr 12 20:25:28 old-k8s-version-20220412200421-42006 kubelet[2956]: E0412 20:25:28.007680    2956 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Apr 12 20:25:33 old-k8s-version-20220412200421-42006 kubelet[2956]: E0412 20:25:33.008552    2956 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Apr 12 20:25:38 old-k8s-version-20220412200421-42006 kubelet[2956]: E0412 20:25:38.009326    2956 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Apr 12 20:25:43 old-k8s-version-20220412200421-42006 kubelet[2956]: E0412 20:25:43.010164    2956 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Apr 12 20:25:48 old-k8s-version-20220412200421-42006 kubelet[2956]: E0412 20:25:48.011036    2956 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Apr 12 20:25:53 old-k8s-version-20220412200421-42006 kubelet[2956]: E0412 20:25:53.011915    2956 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Apr 12 20:25:58 old-k8s-version-20220412200421-42006 kubelet[2956]: E0412 20:25:58.012720    2956 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Apr 12 20:26:03 old-k8s-version-20220412200421-42006 kubelet[2956]: E0412 20:26:03.013456    2956 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Apr 12 20:26:08 old-k8s-version-20220412200421-42006 kubelet[2956]: E0412 20:26:08.014212    2956 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Apr 12 20:26:13 old-k8s-version-20220412200421-42006 kubelet[2956]: E0412 20:26:13.015007    2956 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Apr 12 20:26:18 old-k8s-version-20220412200421-42006 kubelet[2956]: E0412 20:26:18.015777    2956 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Apr 12 20:26:23 old-k8s-version-20220412200421-42006 kubelet[2956]: E0412 20:26:23.016641    2956 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Apr 12 20:26:28 old-k8s-version-20220412200421-42006 kubelet[2956]: E0412 20:26:28.017917    2956 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Apr 12 20:26:33 old-k8s-version-20220412200421-42006 kubelet[2956]: E0412 20:26:33.018611    2956 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Apr 12 20:26:38 old-k8s-version-20220412200421-42006 kubelet[2956]: E0412 20:26:38.019407    2956 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Apr 12 20:26:43 old-k8s-version-20220412200421-42006 kubelet[2956]: E0412 20:26:43.020342    2956 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Apr 12 20:26:48 old-k8s-version-20220412200421-42006 kubelet[2956]: E0412 20:26:48.021111    2956 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Apr 12 20:26:53 old-k8s-version-20220412200421-42006 kubelet[2956]: E0412 20:26:53.021798    2956 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Apr 12 20:26:58 old-k8s-version-20220412200421-42006 kubelet[2956]: E0412 20:26:58.022553    2956 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Apr 12 20:27:03 old-k8s-version-20220412200421-42006 kubelet[2956]: E0412 20:27:03.023392    2956 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Apr 12 20:27:08 old-k8s-version-20220412200421-42006 kubelet[2956]: E0412 20:27:08.024185    2956 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Apr 12 20:27:13 old-k8s-version-20220412200421-42006 kubelet[2956]: E0412 20:27:13.024904    2956 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Apr 12 20:27:18 old-k8s-version-20220412200421-42006 kubelet[2956]: E0412 20:27:18.025684    2956 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Apr 12 20:27:23 old-k8s-version-20220412200421-42006 kubelet[2956]: E0412 20:27:23.026598    2956 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	

                                                
                                                
-- /stdout --
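
The kubelet section above repeats the same error every five seconds: the CNI plugin never initializes, so the node never reports Ready and the test's wait on node_ready/system_pods eventually times out. The earlier kube-scheduler "forbidden" errors are the usual transient noise from the window before kubeadm applies its RBAC rules and are unlikely to be the root cause. Since minikube's containerd profiles point kubelet at cni-conf-dir=/etc/cni/net.mk, one way to confirm the missing CNI config on a live reproduction (a diagnostic sketch, using the profile name from this run) is:

    minikube ssh -p old-k8s-version-20220412200421-42006 -- ls -la /etc/cni/net.mk
    minikube ssh -p old-k8s-version-20220412200421-42006 -- sudo crictl info | grep -i networkready

An empty conf dir, or NetworkReady=false in the crictl status, matches the "cni plugin not initialized" messages above.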
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20220412200421-42006 -n old-k8s-version-20220412200421-42006
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-20220412200421-42006 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: coredns-5644d7b6d9-jhvqs metrics-server-6f89b5864b-g7z8d storage-provisioner dashboard-metrics-scraper-6b84985989-g99k4 kubernetes-dashboard-6fb5469cf5-k6tnl
helpers_test.go:272: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context old-k8s-version-20220412200421-42006 describe pod coredns-5644d7b6d9-jhvqs metrics-server-6f89b5864b-g7z8d storage-provisioner dashboard-metrics-scraper-6b84985989-g99k4 kubernetes-dashboard-6fb5469cf5-k6tnl
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context old-k8s-version-20220412200421-42006 describe pod coredns-5644d7b6d9-jhvqs metrics-server-6f89b5864b-g7z8d storage-provisioner dashboard-metrics-scraper-6b84985989-g99k4 kubernetes-dashboard-6fb5469cf5-k6tnl: exit status 1 (68.580758ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-5644d7b6d9-jhvqs" not found
	Error from server (NotFound): pods "metrics-server-6f89b5864b-g7z8d" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6b84985989-g99k4" not found
	Error from server (NotFound): pods "kubernetes-dashboard-6fb5469cf5-k6tnl" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context old-k8s-version-20220412200421-42006 describe pod coredns-5644d7b6d9-jhvqs metrics-server-6f89b5864b-g7z8d storage-provisioner dashboard-metrics-scraper-6b84985989-g99k4 kubernetes-dashboard-6fb5469cf5-k6tnl: exit status 1
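The NotFound errors appear to be a post-mortem race rather than a separate failure: the pod names gathered at helpers_test.go:270 were stale by the time the describe at helpers_test.go:275 ran, so kubectl could no longer resolve them. Re-resolving the names at describe time avoids the race (a sketch, not the harness's own code):

    kubectl --context old-k8s-version-20220412200421-42006 get pods -A \
        --field-selector=status.phase!=Running \
        -o jsonpath='{range .items[*]}{.metadata.namespace} {.metadata.name}{"\n"}{end}' |
    while read -r ns name; do
        kubectl --context old-k8s-version-20220412200421-42006 describe pod -n "$ns" "$name"
    done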
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (596.07s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (543.85s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:240: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-20220412200510-42006 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.5
E0412 20:18:26.063865   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/bridge-20220412195202-42006/client.crt: no such file or directory
E0412 20:18:31.519029   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/enable-default-cni-20220412195202-42006/client.crt: no such file or directory
E0412 20:18:33.412198   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/cilium-20220412195203-42006/client.crt: no such file or directory
E0412 20:18:51.578347   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220412200453-42006/client.crt: no such file or directory
E0412 20:20:31.558320   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/auto-20220412195201-42006/client.crt: no such file or directory
E0412 20:20:54.807773   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/ingress-addon-legacy-20220412192911-42006/client.crt: no such file or directory
E0412 20:20:58.260053   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/custom-weave-20220412195203-42006/client.crt: no such file or directory
E0412 20:21:07.734618   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220412200453-42006/client.crt: no such file or directory
E0412 20:21:35.418889   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220412200453-42006/client.crt: no such file or directory
E0412 20:22:10.367002   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/cilium-20220412195203-42006/client.crt: no such file or directory
E0412 20:22:57.563320   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/functional-20220412192609-42006/client.crt: no such file or directory
E0412 20:22:58.177859   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/bridge-20220412195202-42006/client.crt: no such file or directory
E0412 20:23:02.669244   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/addons-20220412192056-42006/client.crt: no such file or directory
E0412 20:23:14.514762   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/functional-20220412192609-42006/client.crt: no such file or directory
E0412 20:23:31.519744   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/enable-default-cni-20220412195202-42006/client.crt: no such file or directory
E0412 20:24:54.565179   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/enable-default-cni-20220412195202-42006/client.crt: no such file or directory
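The cert_rotation errors interleaved above come from the test binary's client-go still watching client certificates for profiles that were deleted earlier in the run (bridge, cilium, auto, functional, and so on); they are noisy but unrelated to this test's result. Which profile certificates still exist can be checked with (assuming MINIKUBE_HOME as printed in the start output):

    ls "$MINIKUBE_HOME"/profiles/*/client.crt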

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p embed-certs-20220412200510-42006 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.5: exit status 80 (9m1.738501282s)

                                                
                                                
-- stdout --
	* [embed-certs-20220412200510-42006] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=13812
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on existing profile
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	* Starting control plane node embed-certs-20220412200510-42006 in cluster embed-certs-20220412200510-42006
	* Pulling base image ...
	* Restarting existing docker container for "embed-certs-20220412200510-42006" ...
	* Preparing Kubernetes v1.23.5 on containerd 1.5.10 ...
	  - kubelet.cni-conf-dir=/etc/cni/net.mk
	* Configuring CNI (Container Networking Interface) ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image kubernetesui/dashboard:v2.5.1
	  - Using image k8s.gcr.io/echoserver:1.4
	* Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0412 20:18:25.862605  293188 out.go:297] Setting OutFile to fd 1 ...
	I0412 20:18:25.862730  293188 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0412 20:18:25.862740  293188 out.go:310] Setting ErrFile to fd 2...
	I0412 20:18:25.862745  293188 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0412 20:18:25.862852  293188 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/bin
	I0412 20:18:25.863116  293188 out.go:304] Setting JSON to false
	I0412 20:18:25.864718  293188 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":10859,"bootTime":1649783847,"procs":737,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1023-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0412 20:18:25.864796  293188 start.go:125] virtualization: kvm guest
	I0412 20:18:25.867632  293188 out.go:176] * [embed-certs-20220412200510-42006] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	I0412 20:18:25.869167  293188 out.go:176]   - MINIKUBE_LOCATION=13812
	I0412 20:18:25.867850  293188 notify.go:193] Checking for updates...
	I0412 20:18:25.870679  293188 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0412 20:18:25.872520  293188 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0412 20:18:25.874113  293188 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube
	I0412 20:18:25.875680  293188 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0412 20:18:25.876226  293188 config.go:178] Loaded profile config "embed-certs-20220412200510-42006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.5
	I0412 20:18:25.876728  293188 driver.go:346] Setting default libvirt URI to qemu:///system
	I0412 20:18:25.920777  293188 docker.go:137] docker version: linux-20.10.14
	I0412 20:18:25.920901  293188 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0412 20:18:26.018991  293188 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:75 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:44 SystemTime:2022-04-12 20:18:25.951512717 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1023-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662799872 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clie
ntInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0412 20:18:26.019089  293188 docker.go:254] overlay module found
	I0412 20:18:26.021901  293188 out.go:176] * Using the docker driver based on existing profile
	I0412 20:18:26.021929  293188 start.go:284] selected driver: docker
	I0412 20:18:26.021936  293188 start.go:801] validating driver "docker" against &{Name:embed-certs-20220412200510-42006 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:embed-certs-20220412200510-42006 Namespace:default APIServ
erName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> E
xposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0412 20:18:26.022056  293188 start.go:812] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	W0412 20:18:26.022097  293188 oci.go:120] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0412 20:18:26.022122  293188 out.go:241] ! Your cgroup does not allow setting memory.
	! Your cgroup does not allow setting memory.
	I0412 20:18:26.023822  293188 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0412 20:18:26.024448  293188 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0412 20:18:26.122834  293188 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:75 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:44 SystemTime:2022-04-12 20:18:26.056644105 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1023-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662799872 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clie
ntInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	W0412 20:18:26.123002  293188 oci.go:120] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0412 20:18:26.123035  293188 out.go:241] ! Your cgroup does not allow setting memory.
	! Your cgroup does not allow setting memory.
	I0412 20:18:26.125282  293188 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
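The cgroup warning fires once per "docker system info" probe, and the log shows that probe running twice during driver validation, which is why the "More information" hint also appears twice in the stdout. Note that dockerd itself reports MemoryLimit:true in the info dump, so the warning may reflect minikube's own /sys-based cgroup check rather than a hard kernel limit. Both views can be compared on the host (a sketch; the first fallback-free command needs Docker 20.10+, the second line covers cgroup v2 hosts with a v1 fallback):

    docker info --format '{{.MemoryLimit}} {{.CgroupDriver}} {{.CgroupVersion}}'
    cat /sys/fs/cgroup/cgroup.controllers 2>/dev/null || grep memory /proc/cgroups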
	I0412 20:18:26.125414  293188 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0412 20:18:26.125443  293188 cni.go:93] Creating CNI manager for ""
	I0412 20:18:26.125451  293188 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0412 20:18:26.125472  293188 start_flags.go:306] config:
	{Name:embed-certs-20220412200510-42006 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:embed-certs-20220412200510-42006 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDis
ks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0412 20:18:26.127545  293188 out.go:176] * Starting control plane node embed-certs-20220412200510-42006 in cluster embed-certs-20220412200510-42006
	I0412 20:18:26.127593  293188 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0412 20:18:26.129188  293188 out.go:176] * Pulling base image ...
	I0412 20:18:26.129236  293188 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime containerd
	I0412 20:18:26.129274  293188 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-containerd-overlay2-amd64.tar.lz4
	I0412 20:18:26.129311  293188 cache.go:57] Caching tarball of preloaded images
	I0412 20:18:26.129330  293188 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon
	I0412 20:18:26.129609  293188 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0412 20:18:26.129636  293188 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.5 on containerd
	I0412 20:18:26.129802  293188 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/embed-certs-20220412200510-42006/config.json ...
	I0412 20:18:26.175577  293188 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon, skipping pull
	I0412 20:18:26.175639  293188 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 exists in daemon, skipping load
	I0412 20:18:26.175656  293188 cache.go:206] Successfully downloaded all kic artifacts
	I0412 20:18:26.175717  293188 start.go:352] acquiring machines lock for embed-certs-20220412200510-42006: {Name:mk64f255895db788ec660fe05e5b2f5e43e4987c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0412 20:18:26.175846  293188 start.go:356] acquired machines lock for "embed-certs-20220412200510-42006" in 99.006µs
	I0412 20:18:26.175875  293188 start.go:94] Skipping create...Using existing machine configuration
	I0412 20:18:26.175886  293188 fix.go:55] fixHost starting: 
	I0412 20:18:26.176250  293188 cli_runner.go:164] Run: docker container inspect embed-certs-20220412200510-42006 --format={{.State.Status}}
	I0412 20:18:26.210832  293188 fix.go:103] recreateIfNeeded on embed-certs-20220412200510-42006: state=Stopped err=<nil>
	W0412 20:18:26.210874  293188 fix.go:129] unexpected machine state, will restart: <nil>
	I0412 20:18:26.213643  293188 out.go:176] * Restarting existing docker container for "embed-certs-20220412200510-42006" ...
	I0412 20:18:26.213726  293188 cli_runner.go:164] Run: docker start embed-certs-20220412200510-42006
	I0412 20:18:26.621467  293188 cli_runner.go:164] Run: docker container inspect embed-certs-20220412200510-42006 --format={{.State.Status}}
	I0412 20:18:26.658142  293188 kic.go:416] container "embed-certs-20220412200510-42006" state is running.
	I0412 20:18:26.658585  293188 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220412200510-42006
	I0412 20:18:26.695091  293188 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/embed-certs-20220412200510-42006/config.json ...
	I0412 20:18:26.695340  293188 machine.go:88] provisioning docker machine ...
	I0412 20:18:26.695369  293188 ubuntu.go:169] provisioning hostname "embed-certs-20220412200510-42006"
	I0412 20:18:26.695431  293188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220412200510-42006
	I0412 20:18:26.732045  293188 main.go:134] libmachine: Using SSH client type: native
	I0412 20:18:26.732417  293188 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7d71c0] 0x7da220 <nil>  [] 0s} 127.0.0.1 49432 <nil> <nil>}
	I0412 20:18:26.732462  293188 main.go:134] libmachine: About to run SSH command:
	sudo hostname embed-certs-20220412200510-42006 && echo "embed-certs-20220412200510-42006" | sudo tee /etc/hostname
	I0412 20:18:26.733264  293188 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34530->127.0.0.1:49432: read: connection reset by peer
	I0412 20:18:29.866005  293188 main.go:134] libmachine: SSH cmd err, output: <nil>: embed-certs-20220412200510-42006
	
	I0412 20:18:29.866093  293188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220412200510-42006
	I0412 20:18:29.900758  293188 main.go:134] libmachine: Using SSH client type: native
	I0412 20:18:29.900906  293188 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7d71c0] 0x7da220 <nil>  [] 0s} 127.0.0.1 49432 <nil> <nil>}
	I0412 20:18:29.900927  293188 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-20220412200510-42006' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-20220412200510-42006/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-20220412200510-42006' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0412 20:18:30.024252  293188 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0412 20:18:30.024282  293188 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.mini
kube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube}
	I0412 20:18:30.024338  293188 ubuntu.go:177] setting up certificates
	I0412 20:18:30.024354  293188 provision.go:83] configureAuth start
	I0412 20:18:30.024412  293188 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220412200510-42006
	I0412 20:18:30.058758  293188 provision.go:138] copyHostCerts
	I0412 20:18:30.058845  293188 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem, removing ...
	I0412 20:18:30.058861  293188 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem
	I0412 20:18:30.058929  293188 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem (1082 bytes)
	I0412 20:18:30.059051  293188 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem, removing ...
	I0412 20:18:30.059069  293188 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem
	I0412 20:18:30.059099  293188 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem (1123 bytes)
	I0412 20:18:30.059165  293188 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem, removing ...
	I0412 20:18:30.059178  293188 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem
	I0412 20:18:30.059201  293188 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem (1675 bytes)
	I0412 20:18:30.059267  293188 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem org=jenkins.embed-certs-20220412200510-42006 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-20220412200510-42006]
	I0412 20:18:30.297705  293188 provision.go:172] copyRemoteCerts
	I0412 20:18:30.297778  293188 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0412 20:18:30.297829  293188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220412200510-42006
	I0412 20:18:30.332442  293188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/embed-certs-20220412200510-42006/id_rsa Username:docker}
	I0412 20:18:30.420873  293188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0412 20:18:30.439067  293188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0412 20:18:30.457093  293188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem --> /etc/docker/server.pem (1269 bytes)
	I0412 20:18:30.475014  293188 provision.go:86] duration metric: configureAuth took 450.644265ms
	I0412 20:18:30.475046  293188 ubuntu.go:193] setting minikube options for container-runtime
	I0412 20:18:30.475255  293188 config.go:178] Loaded profile config "embed-certs-20220412200510-42006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.5
	I0412 20:18:30.475269  293188 machine.go:91] provisioned docker machine in 3.779914385s
	I0412 20:18:30.475278  293188 start.go:306] post-start starting for "embed-certs-20220412200510-42006" (driver="docker")
	I0412 20:18:30.475291  293188 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0412 20:18:30.475347  293188 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0412 20:18:30.475392  293188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220412200510-42006
	I0412 20:18:30.510455  293188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/embed-certs-20220412200510-42006/id_rsa Username:docker}
	I0412 20:18:30.600261  293188 ssh_runner.go:195] Run: cat /etc/os-release
	I0412 20:18:30.603987  293188 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0412 20:18:30.604028  293188 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0412 20:18:30.604042  293188 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0412 20:18:30.604051  293188 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0412 20:18:30.604086  293188 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/addons for local assets ...
	I0412 20:18:30.604150  293188 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files for local assets ...
	I0412 20:18:30.604213  293188 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/420062.pem -> 420062.pem in /etc/ssl/certs
	I0412 20:18:30.604287  293188 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0412 20:18:30.611676  293188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/420062.pem --> /etc/ssl/certs/420062.pem (1708 bytes)
	I0412 20:18:30.630124  293188 start.go:309] post-start completed in 154.824821ms
	I0412 20:18:30.630194  293188 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0412 20:18:30.630238  293188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220412200510-42006
	I0412 20:18:30.664427  293188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/embed-certs-20220412200510-42006/id_rsa Username:docker}
	I0412 20:18:30.748775  293188 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0412 20:18:30.752838  293188 fix.go:57] fixHost completed within 4.576944958s
	I0412 20:18:30.752868  293188 start.go:81] releasing machines lock for "embed-certs-20220412200510-42006", held for 4.577006104s
	I0412 20:18:30.752946  293188 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-20220412200510-42006
	I0412 20:18:30.786779  293188 ssh_runner.go:195] Run: systemctl --version
	I0412 20:18:30.786833  293188 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0412 20:18:30.786839  293188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220412200510-42006
	I0412 20:18:30.786895  293188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220412200510-42006
	I0412 20:18:30.823951  293188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/embed-certs-20220412200510-42006/id_rsa Username:docker}
	I0412 20:18:30.826217  293188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/embed-certs-20220412200510-42006/id_rsa Username:docker}
	I0412 20:18:30.926862  293188 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0412 20:18:30.939004  293188 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0412 20:18:30.949472  293188 docker.go:183] disabling docker service ...
	I0412 20:18:30.949536  293188 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0412 20:18:30.959877  293188 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0412 20:18:30.969654  293188 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0412 20:18:31.049568  293188 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0412 20:18:31.130181  293188 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0412 20:18:31.139692  293188 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0412 20:18:31.153074  293188 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLm1vbml0b3IudjEuY2dyb3VwcyJdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNiIKICAgIHN0YXRzX2NvbGxlY3RfcGVyaW9kID0gMTAKICAgIGVuYWJsZV90bHNfc3RyZWFtaW5nID0gZmFsc2UKICA
gIG1heF9jb250YWluZXJfbG9nX2xpbmVfc2l6ZSA9IDE2Mzg0CiAgICByZXN0cmljdF9vb21fc2NvcmVfYWRqID0gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZF0KICAgICAgZGlzY2FyZF91bnBhY2tlZF9sYXllcnMgPSB0cnVlCiAgICAgIHNuYXBzaG90dGVyID0gIm92ZXJsYXlmcyIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQuZGVmYXVsdF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnVudHJ1c3RlZF93b3JrbG9hZF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICIiCiAgICAgICAgcnVudGltZV9lbmdpbmUgPSAiIgogICAgICAgIHJ1bnRpbWVfcm9vdCA9ICIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzLnJ1bmNdCiAgICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICBTeXN0ZW1kQ2dyb3VwID0
gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY25pXQogICAgICBiaW5fZGlyID0gIi9vcHQvY25pL2JpbiIKICAgICAgY29uZl9kaXIgPSAiL2V0Yy9jbmkvbmV0Lm1rIgogICAgICBjb25mX3RlbXBsYXRlID0gIiIKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5yZWdpc3RyeV0KICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5Lm1pcnJvcnNdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5Lm1pcnJvcnMuImRvY2tlci5pbyJdCiAgICAgICAgICBlbmRwb2ludCA9IFsiaHR0cHM6Ly9yZWdpc3RyeS0xLmRvY2tlci5pbyJdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuc2VydmljZS52MS5kaWZmLXNlcnZpY2UiXQogICAgZGVmYXVsdCA9IFsid2Fsa2luZyJdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ2MudjEuc2NoZWR1bGVyIl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
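The base64 payload above is the full /etc/containerd/config.toml that minikube installs; decoding it shows, among other settings, conf_dir = "/etc/cni/net.mk" under the CRI plugin's cni section, the same directory the kubelet.cni-conf-dir flag in the stdout points at. To inspect the config, either decode the payload locally or read it back from the node (profile name from this run; the placeholder stands for the string logged above):

    echo '<base64 payload from the log>' | base64 -d | less
    minikube ssh -p embed-certs-20220412200510-42006 -- sudo cat /etc/containerd/config.toml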
	I0412 20:18:31.166937  293188 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0412 20:18:31.173897  293188 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0412 20:18:31.180575  293188 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0412 20:18:31.251378  293188 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0412 20:18:31.325131  293188 start.go:441] Will wait 60s for socket path /run/containerd/containerd.sock
	I0412 20:18:31.325208  293188 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0412 20:18:31.329163  293188 start.go:462] Will wait 60s for crictl version
	I0412 20:18:31.329215  293188 ssh_runner.go:195] Run: sudo crictl version
	I0412 20:18:31.354553  293188 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-04-12T20:18:31Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0412 20:18:42.402319  293188 ssh_runner.go:195] Run: sudo crictl version
	I0412 20:18:42.427518  293188 start.go:471] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.5.10
	RuntimeApiVersion:  v1alpha2
	I0412 20:18:42.427582  293188 ssh_runner.go:195] Run: containerd --version
	I0412 20:18:42.448343  293188 ssh_runner.go:195] Run: containerd --version
	I0412 20:18:42.472811  293188 out.go:176] * Preparing Kubernetes v1.23.5 on containerd 1.5.10 ...
	I0412 20:18:42.472913  293188 cli_runner.go:164] Run: docker network inspect embed-certs-20220412200510-42006 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0412 20:18:42.506510  293188 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0412 20:18:42.510028  293188 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
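The one-liner above is minikube's idempotent /etc/hosts update: grep -v strips any previous host.minikube.internal entry, the echo appends the current mapping, and the result is staged in /tmp/h.$$ before a sudo cp swaps it into place. The same logic restated in Go, purely as an illustration:

	// hosts_update.go - illustrative restatement of the shell one-liner:
	// drop any stale "host.minikube.internal" line, append the new mapping,
	// and stage the result in a temp file for a privileged copy.
	package main

	import (
		"os"
		"strings"
	)

	func main() {
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			panic(err)
		}
		var kept []string
		for _, line := range strings.Split(string(data), "\n") {
			// equivalent of grep -v $'\thost.minikube.internal$'
			if strings.HasSuffix(line, "\thost.minikube.internal") {
				continue
			}
			kept = append(kept, line)
		}
		kept = append(kept, "192.168.58.1\thost.minikube.internal")
		// The shell version then does: sudo cp /tmp/h.$$ /etc/hosts
		if err := os.WriteFile("/tmp/hosts.new", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
			panic(err)
		}
	}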
	I0412 20:18:42.522298  293188 out.go:176]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0412 20:18:42.522410  293188 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime containerd
	I0412 20:18:42.522486  293188 ssh_runner.go:195] Run: sudo crictl images --output json
	I0412 20:18:42.548260  293188 containerd.go:607] all images are preloaded for containerd runtime.
	I0412 20:18:42.548288  293188 containerd.go:521] Images already preloaded, skipping extraction
	I0412 20:18:42.548350  293188 ssh_runner.go:195] Run: sudo crictl images --output json
	I0412 20:18:42.573330  293188 containerd.go:607] all images are preloaded for containerd runtime.
	I0412 20:18:42.573355  293188 cache_images.go:84] Images are preloaded, skipping loading
	I0412 20:18:42.573400  293188 ssh_runner.go:195] Run: sudo crictl info
	I0412 20:18:42.597742  293188 cni.go:93] Creating CNI manager for ""
	I0412 20:18:42.597769  293188 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0412 20:18:42.597782  293188 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0412 20:18:42.597800  293188 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.23.5 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-20220412200510-42006 NodeName:embed-certs-20220412200510-42006 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0412 20:18:42.597944  293188 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "embed-certs-20220412200510-42006"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.5
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0412 20:18:42.598030  293188 kubeadm.go:936] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.5/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=embed-certs-20220412200510-42006 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.58.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.5 ClusterName:embed-certs-20220412200510-42006 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
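In the generated drop-in above, the empty ExecStart= line is deliberate: systemd treats a bare ExecStart= as "clear any ExecStart inherited from the base kubelet.service", so the second ExecStart= becomes the only start command. Without the reset, systemd would reject the unit for listing two ExecStart entries (only Type=oneshot services may have several). The same two-line pattern applies to any drop-in that replaces a start command; a hypothetical example, for illustration only:

	# /etc/systemd/system/mydaemon.service.d/override.conf (hypothetical)
	[Service]
	ExecStart=
	ExecStart=/usr/local/bin/mydaemon --new-flags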
	I0412 20:18:42.598081  293188 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.5
	I0412 20:18:42.605494  293188 binaries.go:44] Found k8s binaries, skipping transfer
	I0412 20:18:42.605604  293188 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0412 20:18:42.612680  293188 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (577 bytes)
	I0412 20:18:42.626260  293188 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0412 20:18:42.639600  293188 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2061 bytes)
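In these "scp memory --> <path> (N bytes)" lines, the payload (the drop-in, the unit file, kubeadm.yaml.new) appears to be rendered in-process and streamed over the SSH session rather than copied from a file on the Jenkins host. Roughly equivalent, with a hypothetical host and in-memory payload, to:

	// scp_memory.go - sketch of streaming an in-memory buffer to a remote
	// path over ssh; host, user, and payload are hypothetical.
	package main

	import (
		"bytes"
		"os/exec"
	)

	func main() {
		unit := []byte("[Unit]\nWants=containerd.service\n") // built in memory
		cmd := exec.Command("ssh", "docker@127.0.0.1",
			"sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null")
		cmd.Stdin = bytes.NewReader(unit)
		if err := cmd.Run(); err != nil {
			panic(err)
		}
	}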
	I0412 20:18:42.653027  293188 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0412 20:18:42.656044  293188 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0412 20:18:42.665264  293188 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/embed-certs-20220412200510-42006 for IP: 192.168.58.2
	I0412 20:18:42.665394  293188 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.key
	I0412 20:18:42.665433  293188 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.key
	I0412 20:18:42.665515  293188 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/embed-certs-20220412200510-42006/client.key
	I0412 20:18:42.665564  293188 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/embed-certs-20220412200510-42006/apiserver.key.cee25041
	I0412 20:18:42.665596  293188 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/embed-certs-20220412200510-42006/proxy-client.key
	I0412 20:18:42.665720  293188 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/42006.pem (1338 bytes)
	W0412 20:18:42.665758  293188 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/42006_empty.pem, impossibly tiny 0 bytes
	I0412 20:18:42.665772  293188 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem (1679 bytes)
	I0412 20:18:42.665799  293188 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem (1082 bytes)
	I0412 20:18:42.665824  293188 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem (1123 bytes)
	I0412 20:18:42.665847  293188 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem (1675 bytes)
	I0412 20:18:42.665883  293188 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/420062.pem (1708 bytes)
	I0412 20:18:42.666420  293188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/embed-certs-20220412200510-42006/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0412 20:18:42.684961  293188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/embed-certs-20220412200510-42006/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0412 20:18:42.703505  293188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/embed-certs-20220412200510-42006/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0412 20:18:42.722170  293188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/embed-certs-20220412200510-42006/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0412 20:18:42.740728  293188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0412 20:18:42.759411  293188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0412 20:18:42.777909  293188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0412 20:18:42.795814  293188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0412 20:18:42.813492  293188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0412 20:18:42.831827  293188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/42006.pem --> /usr/share/ca-certificates/42006.pem (1338 bytes)
	I0412 20:18:42.850182  293188 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/420062.pem --> /usr/share/ca-certificates/420062.pem (1708 bytes)
	I0412 20:18:42.867975  293188 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0412 20:18:42.882318  293188 ssh_runner.go:195] Run: openssl version
	I0412 20:18:42.887540  293188 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/42006.pem && ln -fs /usr/share/ca-certificates/42006.pem /etc/ssl/certs/42006.pem"
	I0412 20:18:42.895898  293188 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42006.pem
	I0412 20:18:42.899141  293188 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Apr 12 19:26 /usr/share/ca-certificates/42006.pem
	I0412 20:18:42.899202  293188 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42006.pem
	I0412 20:18:42.904418  293188 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/42006.pem /etc/ssl/certs/51391683.0"
	I0412 20:18:42.911721  293188 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/420062.pem && ln -fs /usr/share/ca-certificates/420062.pem /etc/ssl/certs/420062.pem"
	I0412 20:18:42.919627  293188 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/420062.pem
	I0412 20:18:42.922828  293188 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Apr 12 19:26 /usr/share/ca-certificates/420062.pem
	I0412 20:18:42.922889  293188 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/420062.pem
	I0412 20:18:42.928163  293188 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/420062.pem /etc/ssl/certs/3ec20f2e.0"
	I0412 20:18:42.935357  293188 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0412 20:18:42.942820  293188 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0412 20:18:42.945929  293188 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Apr 12 19:21 /usr/share/ca-certificates/minikubeCA.pem
	I0412 20:18:42.945976  293188 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0412 20:18:42.950738  293188 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
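The 8-hex-digit names being linked here (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject-name hashes: `openssl x509 -hash -noout -in cert.pem` prints the hash, and a `<hash>.0` symlink in /etc/ssl/certs is what OpenSSL's by-directory CA lookup expects, which is the same scheme tools like c_rehash automate. A sketch of the step (illustrative, not minikube's code; needs root to write /etc/ssl/certs):

	// cert_hash_link.go - compute a cert's OpenSSL subject hash and create
	// the <hash>.0 symlink, mirroring the ln -fs commands above.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		cert := "/usr/share/ca-certificates/minikubeCA.pem"
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
		if err != nil {
			panic(err)
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
		link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
		os.Remove(link) // ln -fs semantics: replace the link if it exists
		if err := os.Symlink(cert, link); err != nil {
			panic(err)
		}
	}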
	I0412 20:18:42.957667  293188 kubeadm.go:391] StartCluster: {Name:embed-certs-20220412200510-42006 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:embed-certs-20220412200510-42006 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0412 20:18:42.957775  293188 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0412 20:18:42.957819  293188 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0412 20:18:42.983592  293188 cri.go:87] found id: "45fabe7cb7395e0c30a4393ad9200abaf7881d0466d5ffdcde46faf8e637daae"
	I0412 20:18:42.983618  293188 cri.go:87] found id: "99c30d34ba6769dbe90b18eefcf0db92072e5d977b32371ee959bba91b958dc9"
	I0412 20:18:42.983624  293188 cri.go:87] found id: "1549b6cbd198c45abd7224f0fbd5ce0d6713b1d4c5ccbad32a34ac2b6a109d2d"
	I0412 20:18:42.983631  293188 cri.go:87] found id: "3ecbbe2de190c9c1e2f575bb88b355a7eaf09932cb16fd1a6cef069051de9930"
	I0412 20:18:42.983636  293188 cri.go:87] found id: "3bb4ed6826e041fff709fbb31d1f2446a15f08bcc0fa07eb151243acd0226bed"
	I0412 20:18:42.983642  293188 cri.go:87] found id: "e67989f440e4332c6ff00c54e8fa657032c034f05a0edc75576cb16ffd4794b0"
	I0412 20:18:42.983648  293188 cri.go:87] found id: ""
	I0412 20:18:42.983682  293188 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0412 20:18:42.997448  293188 cri.go:114] JSON = null
	W0412 20:18:42.997504  293188 kubeadm.go:398] unpause failed: list paused: list returned 0 containers, but ps returned 6
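This warning is the pause bookkeeping check: crictl (the CRI view) reports six kube-system containers, but `runc --root /run/containerd/runc/k8s.io list -f json` prints null, so minikube cannot tell which containers, if any, are paused, and logs the mismatch before continuing. A sketch of that cross-check (illustrative only; error handling elided):

	// pause_check.go - compare the CRI container list against runc's view,
	// reproducing the "list returned 0 ... but ps returned N" warning above.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		criOut, _ := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		ids := strings.Fields(string(criOut))

		runcOut, _ := exec.Command("sudo", "runc",
			"--root", "/run/containerd/runc/k8s.io", "list", "-f", "json").Output()
		var runcList []map[string]interface{}
		_ = json.Unmarshal(runcOut, &runcList) // literal "null" yields a nil slice

		if len(runcList) == 0 && len(ids) > 0 {
			fmt.Printf("list returned 0 containers, but ps returned %d\n", len(ids))
		}
	}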
	I0412 20:18:42.997555  293188 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0412 20:18:43.004738  293188 kubeadm.go:402] found existing configuration files, will attempt cluster restart
	I0412 20:18:43.004762  293188 kubeadm.go:601] restartCluster start
	I0412 20:18:43.004809  293188 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0412 20:18:43.012338  293188 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:18:43.013058  293188 kubeconfig.go:116] verify returned: extract IP: "embed-certs-20220412200510-42006" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0412 20:18:43.013376  293188 kubeconfig.go:127] "embed-certs-20220412200510-42006" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig - will repair!
	I0412 20:18:43.013929  293188 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig: {Name:mk47182dadcc139652898f38b199a7292a3b4031 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 20:18:43.015377  293188 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0412 20:18:43.022831  293188 api_server.go:165] Checking apiserver status ...
	I0412 20:18:43.022901  293188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:18:43.032323  293188 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:18:43.232731  293188 api_server.go:165] Checking apiserver status ...
	I0412 20:18:43.232839  293188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:18:43.241744  293188 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:18:43.433096  293188 api_server.go:165] Checking apiserver status ...
	I0412 20:18:43.433175  293188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:18:43.442230  293188 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:18:43.632561  293188 api_server.go:165] Checking apiserver status ...
	I0412 20:18:43.632636  293188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:18:43.641527  293188 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:18:43.832747  293188 api_server.go:165] Checking apiserver status ...
	I0412 20:18:43.832833  293188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:18:43.841699  293188 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:18:44.032995  293188 api_server.go:165] Checking apiserver status ...
	I0412 20:18:44.033117  293188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:18:44.042221  293188 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:18:44.232605  293188 api_server.go:165] Checking apiserver status ...
	I0412 20:18:44.232679  293188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:18:44.241596  293188 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:18:44.432814  293188 api_server.go:165] Checking apiserver status ...
	I0412 20:18:44.432898  293188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:18:44.441681  293188 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:18:44.633020  293188 api_server.go:165] Checking apiserver status ...
	I0412 20:18:44.633115  293188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:18:44.642100  293188 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:18:44.833416  293188 api_server.go:165] Checking apiserver status ...
	I0412 20:18:44.833505  293188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:18:44.843045  293188 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:18:45.033244  293188 api_server.go:165] Checking apiserver status ...
	I0412 20:18:45.033372  293188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:18:45.042455  293188 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:18:45.232743  293188 api_server.go:165] Checking apiserver status ...
	I0412 20:18:45.232829  293188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:18:45.241922  293188 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:18:45.433151  293188 api_server.go:165] Checking apiserver status ...
	I0412 20:18:45.433234  293188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:18:45.442285  293188 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:18:45.632437  293188 api_server.go:165] Checking apiserver status ...
	I0412 20:18:45.632580  293188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:18:45.641663  293188 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:18:45.833174  293188 api_server.go:165] Checking apiserver status ...
	I0412 20:18:45.833254  293188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:18:45.842437  293188 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:18:46.032944  293188 api_server.go:165] Checking apiserver status ...
	I0412 20:18:46.033024  293188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:18:46.042136  293188 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:18:46.042169  293188 api_server.go:165] Checking apiserver status ...
	I0412 20:18:46.042209  293188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:18:46.050391  293188 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:18:46.050420  293188 kubeadm.go:576] needs reconfigure: apiserver error: timed out waiting for the condition
	I0412 20:18:46.050427  293188 kubeadm.go:1067] stopping kube-system containers ...
	I0412 20:18:46.050443  293188 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0412 20:18:46.050494  293188 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0412 20:18:46.077200  293188 cri.go:87] found id: "45fabe7cb7395e0c30a4393ad9200abaf7881d0466d5ffdcde46faf8e637daae"
	I0412 20:18:46.077226  293188 cri.go:87] found id: "99c30d34ba6769dbe90b18eefcf0db92072e5d977b32371ee959bba91b958dc9"
	I0412 20:18:46.077240  293188 cri.go:87] found id: "1549b6cbd198c45abd7224f0fbd5ce0d6713b1d4c5ccbad32a34ac2b6a109d2d"
	I0412 20:18:46.077247  293188 cri.go:87] found id: "3ecbbe2de190c9c1e2f575bb88b355a7eaf09932cb16fd1a6cef069051de9930"
	I0412 20:18:46.077255  293188 cri.go:87] found id: "3bb4ed6826e041fff709fbb31d1f2446a15f08bcc0fa07eb151243acd0226bed"
	I0412 20:18:46.077286  293188 cri.go:87] found id: "e67989f440e4332c6ff00c54e8fa657032c034f05a0edc75576cb16ffd4794b0"
	I0412 20:18:46.077300  293188 cri.go:87] found id: ""
	I0412 20:18:46.077307  293188 cri.go:232] Stopping containers: [45fabe7cb7395e0c30a4393ad9200abaf7881d0466d5ffdcde46faf8e637daae 99c30d34ba6769dbe90b18eefcf0db92072e5d977b32371ee959bba91b958dc9 1549b6cbd198c45abd7224f0fbd5ce0d6713b1d4c5ccbad32a34ac2b6a109d2d 3ecbbe2de190c9c1e2f575bb88b355a7eaf09932cb16fd1a6cef069051de9930 3bb4ed6826e041fff709fbb31d1f2446a15f08bcc0fa07eb151243acd0226bed e67989f440e4332c6ff00c54e8fa657032c034f05a0edc75576cb16ffd4794b0]
	I0412 20:18:46.077363  293188 ssh_runner.go:195] Run: which crictl
	I0412 20:18:46.080533  293188 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop 45fabe7cb7395e0c30a4393ad9200abaf7881d0466d5ffdcde46faf8e637daae 99c30d34ba6769dbe90b18eefcf0db92072e5d977b32371ee959bba91b958dc9 1549b6cbd198c45abd7224f0fbd5ce0d6713b1d4c5ccbad32a34ac2b6a109d2d 3ecbbe2de190c9c1e2f575bb88b355a7eaf09932cb16fd1a6cef069051de9930 3bb4ed6826e041fff709fbb31d1f2446a15f08bcc0fa07eb151243acd0226bed e67989f440e4332c6ff00c54e8fa657032c034f05a0edc75576cb16ffd4794b0
	I0412 20:18:46.108221  293188 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0412 20:18:46.118944  293188 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0412 20:18:46.126295  293188 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Apr 12 20:05 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Apr 12 20:05 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2067 Apr 12 20:05 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Apr 12 20:05 /etc/kubernetes/scheduler.conf
	
	I0412 20:18:46.126355  293188 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0412 20:18:46.133414  293188 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0412 20:18:46.140348  293188 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0412 20:18:46.147289  293188 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:18:46.147353  293188 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0412 20:18:46.153983  293188 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0412 20:18:46.160779  293188 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:18:46.160847  293188 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0412 20:18:46.167729  293188 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0412 20:18:46.174673  293188 kubeadm.go:678] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0412 20:18:46.174697  293188 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0412 20:18:46.219984  293188 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0412 20:18:46.780655  293188 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0412 20:18:46.916175  293188 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0412 20:18:46.967869  293188 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
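Because existing configuration files were found, the restart path re-runs individual `kubeadm init` phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated kubeadm.yaml instead of doing a full `kubeadm init`. The sequence above, reduced to a loop (the PATH pinning to /var/lib/minikube/binaries is omitted; illustrative only):

	// kubeadm_phases.go - run the same kubeadm init phases the restart
	// path invokes, in order, against the staged config.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
		for _, p := range phases {
			args := append([]string{"kubeadm", "init", "phase"}, strings.Fields(p)...)
			args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
			if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
				fmt.Printf("phase %q failed: %v\n%s", p, err, out)
				return
			}
		}
	}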
	I0412 20:18:47.020948  293188 api_server.go:51] waiting for apiserver process to appear ...
	I0412 20:18:47.021032  293188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:18:47.530989  293188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:18:48.030856  293188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:18:48.530765  293188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:18:49.030619  293188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:18:49.530473  293188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:18:50.030687  293188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:18:50.530420  293188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:18:51.031271  293188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:18:51.530751  293188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:18:52.030588  293188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:18:52.530431  293188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:18:53.031324  293188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:18:53.091818  293188 api_server.go:71] duration metric: took 6.07087219s to wait for apiserver process to appear ...
	I0412 20:18:53.091857  293188 api_server.go:87] waiting for apiserver healthz status ...
	I0412 20:18:53.091871  293188 api_server.go:240] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0412 20:18:53.092280  293188 api_server.go:256] stopped: https://192.168.58.2:8443/healthz: Get "https://192.168.58.2:8443/healthz": dial tcp 192.168.58.2:8443: connect: connection refused
	I0412 20:18:53.593049  293188 api_server.go:240] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0412 20:18:55.985909  293188 api_server.go:266] https://192.168.58.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0412 20:18:55.985946  293188 api_server.go:102] status: https://192.168.58.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0412 20:18:56.093093  293188 api_server.go:240] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0412 20:18:56.106818  293188 api_server.go:266] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0412 20:18:56.106855  293188 api_server.go:102] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0412 20:18:56.593283  293188 api_server.go:240] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0412 20:18:56.598524  293188 api_server.go:266] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0412 20:18:56.598552  293188 api_server.go:102] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0412 20:18:57.093125  293188 api_server.go:240] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0412 20:18:57.098065  293188 api_server.go:266] https://192.168.58.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0412 20:18:57.098143  293188 api_server.go:102] status: https://192.168.58.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0412 20:18:57.593444  293188 api_server.go:240] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0412 20:18:57.598330  293188 api_server.go:266] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0412 20:18:57.604742  293188 api_server.go:140] control plane version: v1.23.5
	I0412 20:18:57.604771  293188 api_server.go:130] duration metric: took 4.512906341s to wait for apiserver health ...
	I0412 20:18:57.604785  293188 cni.go:93] Creating CNI manager for ""
	I0412 20:18:57.604793  293188 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0412 20:18:57.607772  293188 out.go:176] * Configuring CNI (Container Networking Interface) ...
	I0412 20:18:57.607862  293188 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0412 20:18:57.612047  293188 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.5/kubectl ...
	I0412 20:18:57.612106  293188 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0412 20:18:57.625606  293188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0412 20:18:58.259688  293188 system_pods.go:43] waiting for kube-system pods to appear ...
	I0412 20:18:58.267983  293188 system_pods.go:59] 9 kube-system pods found
	I0412 20:18:58.268016  293188 system_pods.go:61] "coredns-64897985d-zvglg" [d5fab6b5-c460-460f-8cb9-6a8df3a0a493] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0412 20:18:58.268026  293188 system_pods.go:61] "etcd-embed-certs-20220412200510-42006" [f0b1b85a-9a7c-49a3-9c3a-f120f8274f99] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0412 20:18:58.268033  293188 system_pods.go:61] "kindnet-7f7sj" [059bb69b-b8de-4f71-85b1-8d7391491598] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0412 20:18:58.268040  293188 system_pods.go:61] "kube-apiserver-embed-certs-20220412200510-42006" [6cfeb71b-0d01-4c67-8a26-edbc213c684f] Running
	I0412 20:18:58.268048  293188 system_pods.go:61] "kube-controller-manager-embed-certs-20220412200510-42006" [726d3fb3-6d83-4325-9328-a407b3bffd34] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0412 20:18:58.268055  293188 system_pods.go:61] "kube-proxy-6nznr" [aa45eb74-fde3-453a-82ad-e29ae4116d51] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0412 20:18:58.268060  293188 system_pods.go:61] "kube-scheduler-embed-certs-20220412200510-42006" [c03b607f-b4f9-4ff6-8d07-8890c53a7dd6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0412 20:18:58.268085  293188 system_pods.go:61] "metrics-server-b955d9d8-6cvmp" [cfc4546c-e7eb-4626-af34-9d7382032070] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0412 20:18:58.268094  293188 system_pods.go:61] "storage-provisioner" [c17111bc-be71-4c72-9d44-0de354dc03e1] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0412 20:18:58.268110  293188 system_pods.go:74] duration metric: took 8.401782ms to wait for pod list to return data ...
	I0412 20:18:58.268120  293188 node_conditions.go:102] verifying NodePressure condition ...
	I0412 20:18:58.270949  293188 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0412 20:18:58.270997  293188 node_conditions.go:123] node cpu capacity is 8
	I0412 20:18:58.271013  293188 node_conditions.go:105] duration metric: took 2.882717ms to run NodePressure ...
	I0412 20:18:58.271045  293188 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0412 20:18:58.422028  293188 kubeadm.go:737] waiting for restarted kubelet to initialise ...
	I0412 20:18:58.426575  293188 kubeadm.go:752] kubelet initialised
	I0412 20:18:58.426601  293188 kubeadm.go:753] duration metric: took 4.547593ms waiting for restarted kubelet to initialise ...
	I0412 20:18:58.426610  293188 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0412 20:18:58.432786  293188 pod_ready.go:78] waiting up to 4m0s for pod "coredns-64897985d-zvglg" in "kube-system" namespace to be "Ready" ...
	I0412 20:19:00.439498  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:02.939601  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:05.439254  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:07.439551  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:09.939856  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:12.439364  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:14.939042  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:16.939458  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:19.439708  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:19:21.938672  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	[... pod_ready.go:102 line repeated 92 more times at ~2.5s intervals (20:19:23 through 20:22:55), each showing the identical Pending/Unschedulable status: 0/1 nodes available, node tainted node.kubernetes.io/not-ready ...]
	I0412 20:22:57.939670  293188 pod_ready.go:102] pod "coredns-64897985d-zvglg" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:06:06 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:22:58.435823  293188 pod_ready.go:81] duration metric: took 4m0.002987778s waiting for pod "coredns-64897985d-zvglg" in "kube-system" namespace to be "Ready" ...
	E0412 20:22:58.435854  293188 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "coredns-64897985d-zvglg" in "kube-system" namespace to be "Ready" (will not retry!)
	I0412 20:22:58.435889  293188 pod_ready.go:38] duration metric: took 4m0.00926918s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0412 20:22:58.435924  293188 kubeadm.go:605] restartCluster took 4m15.431156944s
	W0412 20:22:58.436101  293188 out.go:241] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
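For context: the wait that expires above repeatedly fetches the pod and checks its status conditions until PodReady is True or the 4m0s budget runs out. A minimal client-go sketch of that polling pattern (illustrative only, assuming a reachable kubeconfig; this is not minikube's actual pod_ready.go implementation):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls until the pod reports the PodReady condition or the
// timeout elapses, mirroring the ~2.5s poll cadence visible in the log above.
func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting %v for pod %q to be Ready", timeout, name)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println(waitPodReady(cs, "kube-system", "coredns-64897985d-zvglg", 4*time.Minute))
}

In this run the pod never even schedules: the node keeps its node.kubernetes.io/not-ready taint, so every poll reports Unschedulable until the wait gives up and minikube falls back to a full reset below.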
	I0412 20:22:58.436140  293188 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0412 20:23:00.308017  293188 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.871849788s)
	I0412 20:23:00.308112  293188 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0412 20:23:00.320139  293188 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0412 20:23:00.327966  293188 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0412 20:23:00.328042  293188 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0412 20:23:00.336326  293188 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
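The four missing files are expected at this point: the kubeadm reset --force above wipes /etc/kubernetes, so the stale-config check exits with status 2 and minikube skips cleanup and proceeds straight to kubeadm init. A hypothetical sketch of such an existence check done natively rather than via sudo ls -la:

package main

import (
	"fmt"
	"os"
)

func main() {
	// The same four kubeconfig files the log checks for above.
	confs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, c := range confs {
		if _, err := os.Stat(c); err != nil {
			// Missing after `kubeadm reset`; nothing stale to clean up.
			fmt.Printf("config check failed for %s: %v\n", c, err)
		}
	}
}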
	I0412 20:23:00.336368  293188 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0412 20:23:00.611970  293188 out.go:203]   - Generating certificates and keys ...
	I0412 20:23:01.168395  293188 out.go:203]   - Booting up control plane ...
	I0412 20:23:12.717153  293188 out.go:203]   - Configuring RBAC rules ...
	I0412 20:23:13.131342  293188 cni.go:93] Creating CNI manager for ""
	I0412 20:23:13.131368  293188 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0412 20:23:13.133726  293188 out.go:176] * Configuring CNI (Container Networking Interface) ...
	I0412 20:23:13.133819  293188 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0412 20:23:13.137703  293188 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.5/kubectl ...
	I0412 20:23:13.137723  293188 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0412 20:23:13.151266  293188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0412 20:23:13.779496  293188 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0412 20:23:13.779592  293188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:13.779602  293188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl label nodes minikube.k8s.io/version=v1.25.2 minikube.k8s.io/commit=dcd548d63d1c0dcbdc0ffc0bd37d4379117c142f minikube.k8s.io/name=embed-certs-20220412200510-42006 minikube.k8s.io/updated_at=2022_04_12T20_23_13_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:13.844319  293188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:13.844349  293188 ops.go:34] apiserver oom_adj: -16
	I0412 20:23:14.416398  293188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:14.915875  293188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:15.416596  293188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:15.916799  293188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:16.416204  293188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:16.916796  293188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:17.416351  293188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:17.916642  293188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:18.416704  293188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:18.916121  293188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:19.415863  293188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:19.915946  293188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:20.416316  293188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:20.916222  293188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:21.416859  293188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:21.916573  293188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:22.415915  293188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:22.915956  293188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:23.416356  293188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:23.916733  293188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:24.415894  293188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:24.916772  293188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:25.416205  293188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:25.916674  293188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:26.416183  293188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:26.916867  293188 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:23:26.975833  293188 kubeadm.go:1020] duration metric: took 13.196293095s to wait for elevateKubeSystemPrivileges.
	I0412 20:23:26.975874  293188 kubeadm.go:393] StartCluster complete in 4m44.018219722s
	I0412 20:23:26.975896  293188 settings.go:142] acquiring lock: {Name:mkaf0259d09993f7f0249c32b54fea561e21f88c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 20:23:26.976012  293188 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0412 20:23:26.978211  293188 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig: {Name:mk47182dadcc139652898f38b199a7292a3b4031 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 20:23:27.500701  293188 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "embed-certs-20220412200510-42006" rescaled to 1
	I0412 20:23:27.500763  293188 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0412 20:23:27.503023  293188 out.go:176] * Verifying Kubernetes components...
	I0412 20:23:27.500837  293188 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0412 20:23:27.503093  293188 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0412 20:23:27.500871  293188 addons.go:415] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I0412 20:23:27.503173  293188 addons.go:65] Setting storage-provisioner=true in profile "embed-certs-20220412200510-42006"
	I0412 20:23:27.501024  293188 config.go:178] Loaded profile config "embed-certs-20220412200510-42006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.5
	I0412 20:23:27.503205  293188 addons.go:65] Setting metrics-server=true in profile "embed-certs-20220412200510-42006"
	I0412 20:23:27.503209  293188 addons.go:65] Setting default-storageclass=true in profile "embed-certs-20220412200510-42006"
	I0412 20:23:27.503216  293188 addons.go:153] Setting addon metrics-server=true in "embed-certs-20220412200510-42006"
	I0412 20:23:27.503190  293188 addons.go:65] Setting dashboard=true in profile "embed-certs-20220412200510-42006"
	I0412 20:23:27.503256  293188 addons.go:153] Setting addon dashboard=true in "embed-certs-20220412200510-42006"
	I0412 20:23:27.503196  293188 addons.go:153] Setting addon storage-provisioner=true in "embed-certs-20220412200510-42006"
	W0412 20:23:27.503276  293188 addons.go:165] addon dashboard should already be in state true
	W0412 20:23:27.503282  293188 addons.go:165] addon storage-provisioner should already be in state true
	I0412 20:23:27.503325  293188 host.go:66] Checking if "embed-certs-20220412200510-42006" exists ...
	I0412 20:23:27.503325  293188 host.go:66] Checking if "embed-certs-20220412200510-42006" exists ...
	W0412 20:23:27.503229  293188 addons.go:165] addon metrics-server should already be in state true
	I0412 20:23:27.503228  293188 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-20220412200510-42006"
	I0412 20:23:27.503589  293188 host.go:66] Checking if "embed-certs-20220412200510-42006" exists ...
	I0412 20:23:27.503804  293188 cli_runner.go:164] Run: docker container inspect embed-certs-20220412200510-42006 --format={{.State.Status}}
	I0412 20:23:27.503948  293188 cli_runner.go:164] Run: docker container inspect embed-certs-20220412200510-42006 --format={{.State.Status}}
	I0412 20:23:27.503973  293188 cli_runner.go:164] Run: docker container inspect embed-certs-20220412200510-42006 --format={{.State.Status}}
	I0412 20:23:27.504031  293188 cli_runner.go:164] Run: docker container inspect embed-certs-20220412200510-42006 --format={{.State.Status}}
	I0412 20:23:27.516146  293188 node_ready.go:35] waiting up to 6m0s for node "embed-certs-20220412200510-42006" to be "Ready" ...
	I0412 20:23:27.550686  293188 out.go:176]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0412 20:23:27.550784  293188 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0412 20:23:27.550803  293188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0412 20:23:27.550859  293188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220412200510-42006
	I0412 20:23:27.556204  293188 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0412 20:23:27.556346  293188 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0412 20:23:27.556362  293188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0412 20:23:27.556409  293188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220412200510-42006
	I0412 20:23:27.560689  293188 addons.go:153] Setting addon default-storageclass=true in "embed-certs-20220412200510-42006"
	W0412 20:23:27.560742  293188 addons.go:165] addon default-storageclass should already be in state true
	I0412 20:23:27.560776  293188 host.go:66] Checking if "embed-certs-20220412200510-42006" exists ...
	I0412 20:23:27.561846  293188 cli_runner.go:164] Run: docker container inspect embed-certs-20220412200510-42006 --format={{.State.Status}}
	I0412 20:23:27.563827  293188 out.go:176]   - Using image kubernetesui/dashboard:v2.5.1
	I0412 20:23:27.566302  293188 out.go:176]   - Using image k8s.gcr.io/echoserver:1.4
	I0412 20:23:27.566378  293188 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0412 20:23:27.566390  293188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0412 20:23:27.566448  293188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220412200510-42006
	I0412 20:23:27.595498  293188 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0412 20:23:27.598031  293188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/embed-certs-20220412200510-42006/id_rsa Username:docker}
	I0412 20:23:27.600994  293188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/embed-certs-20220412200510-42006/id_rsa Username:docker}
	I0412 20:23:27.616248  293188 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0412 20:23:27.616282  293188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0412 20:23:27.616343  293188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-20220412200510-42006
	I0412 20:23:27.627801  293188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/embed-certs-20220412200510-42006/id_rsa Username:docker}
	I0412 20:23:27.656490  293188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49432 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/embed-certs-20220412200510-42006/id_rsa Username:docker}
	I0412 20:23:27.738871  293188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0412 20:23:27.787800  293188 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0412 20:23:27.787831  293188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0412 20:23:27.791933  293188 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0412 20:23:27.791958  293188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0412 20:23:27.797765  293188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0412 20:23:27.803394  293188 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0412 20:23:27.803425  293188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0412 20:23:27.808640  293188 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0412 20:23:27.808666  293188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0412 20:23:27.892163  293188 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0412 20:23:27.892195  293188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0412 20:23:27.896562  293188 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0412 20:23:27.896592  293188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0412 20:23:27.901548  293188 start.go:777] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS
	I0412 20:23:27.979768  293188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0412 20:23:27.980178  293188 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0412 20:23:27.980200  293188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0412 20:23:28.001603  293188 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0412 20:23:28.001637  293188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0412 20:23:28.086251  293188 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0412 20:23:28.086331  293188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0412 20:23:28.102562  293188 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0412 20:23:28.102631  293188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0412 20:23:28.179329  293188 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0412 20:23:28.179360  293188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0412 20:23:28.201845  293188 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0412 20:23:28.201898  293188 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0412 20:23:28.292511  293188 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0412 20:23:28.699642  293188 addons.go:386] Verifying addon metrics-server=true in "embed-certs-20220412200510-42006"
	I0412 20:23:29.110155  293188 out.go:176] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0412 20:23:29.110184  293188 addons.go:417] enableAddons completed in 1.609328567s
	I0412 20:23:29.529851  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:23:32.030061  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:23:34.030385  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:23:36.529738  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:23:39.029385  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:23:41.030287  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:23:43.030360  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:23:45.530065  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:23:47.530314  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:23:49.530546  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:23:52.030189  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:23:54.529461  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:23:56.530043  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:23:59.029436  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:24:01.029972  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:24:03.530117  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:24:05.530287  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:24:08.029993  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:24:10.529708  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:24:12.530227  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:24:15.030365  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:24:17.529883  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:24:20.030387  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:24:22.529841  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:24:25.029353  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:24:27.029951  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:24:29.529761  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:24:31.529947  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:24:34.029808  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:24:36.030055  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:24:38.529175  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:24:40.529796  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:24:43.030151  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:24:45.529652  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:24:47.530080  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:24:50.029611  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:24:52.029988  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:24:54.529864  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:24:56.530329  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:24:59.030173  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:25:01.529575  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:25:03.529634  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:25:05.530147  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:25:08.030263  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:25:10.529544  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:25:12.529795  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:25:15.029585  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:25:17.029751  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:25:19.529776  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:25:22.030036  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:25:24.030201  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:25:26.529335  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:25:28.529505  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:25:30.530072  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:25:33.029740  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:25:35.030132  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:25:37.030268  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:25:39.030717  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:25:41.529434  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:25:44.030080  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:25:46.529944  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:25:48.530319  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:25:51.029894  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:25:53.030953  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:25:55.529321  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:25:57.529524  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:25:59.529890  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:02.030088  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:04.030403  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:06.529561  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:08.530280  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:11.029929  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:13.030209  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:15.530013  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:18.029626  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:20.029796  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:22.530289  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:25.030441  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:27.529706  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:29.529954  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:32.029879  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:34.030539  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:36.030819  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:38.529411  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:40.529737  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:43.029801  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:45.030177  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:47.529990  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:50.029848  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:52.529958  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:54.530211  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:57.029172  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:59.029355  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:27:01.030119  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:27:03.529481  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:27:06.030007  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:27:08.529404  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:27:11.029791  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:27:13.030282  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:27:15.529429  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:27:17.530006  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:27:20.030016  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:27:22.030056  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:27:24.529656  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:27:27.029850  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:27:27.532457  293188 node_ready.go:38] duration metric: took 4m0.016261704s waiting for node "embed-certs-20220412200510-42006" to be "Ready" ...
	I0412 20:27:27.535074  293188 out.go:176] 
	W0412 20:27:27.535184  293188 out.go:241] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0412 20:27:27.535195  293188 out.go:241] * 
	* 
	W0412 20:27:27.535868  293188 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0412 20:27:27.537242  293188 out.go:176] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:243: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p embed-certs-20220412200510-42006 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.5": exit status 80
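Editor's note: the exit status 80 above is the 6m0s node wait from start.go:208 timing out; the stderr log polls the node's Ready condition roughly every 2.5s from 20:23:29 to 20:27:27 and never sees it turn true. As a rough illustration only (this is not minikube's actual node_ready.go; the kubeconfig path, node name, and timings are simply copied from the log above, and client-go is assumed), the check that timed out is equivalent to:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumed path and node name, taken from the log above.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(6 * time.Minute) // start.go:208 waits 6m0s
		for time.Now().Before(deadline) {
			node, err := client.CoreV1().Nodes().Get(context.TODO(),
				"embed-certs-20220412200510-42006", metav1.GetOptions{})
			if err == nil {
				for _, c := range node.Status.Conditions {
					// Success once the node's Ready condition reports True.
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						fmt.Println("node is Ready")
						return
					}
				}
			}
			time.Sleep(2500 * time.Millisecond) // matches the ~2.5s cadence in the log
		}
		fmt.Println("timed out waiting for the condition") // the GUEST_START error above
	}

In this run the condition stayed "Ready":"False" for the entire window, so the test fails with GUEST_START / waitNodeCondition rather than an apiserver error.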
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220412200510-42006
helpers_test.go:235: (dbg) docker inspect embed-certs-20220412200510-42006:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "340eb3625ebd62fb359cd33fcc6dcfaf998d12a5a7abf9d2b97ffe2759fd47b7",
	        "Created": "2022-04-12T20:05:23.305199436Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 293455,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-04-12T20:18:26.612502116Z",
	            "FinishedAt": "2022-04-12T20:18:25.329747162Z"
	        },
	        "Image": "sha256:44d43b69f3d5ba7f801dca891b535f23f9839671e82277938ec7dc42a22c50d6",
	        "ResolvConfPath": "/var/lib/docker/containers/340eb3625ebd62fb359cd33fcc6dcfaf998d12a5a7abf9d2b97ffe2759fd47b7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/340eb3625ebd62fb359cd33fcc6dcfaf998d12a5a7abf9d2b97ffe2759fd47b7/hostname",
	        "HostsPath": "/var/lib/docker/containers/340eb3625ebd62fb359cd33fcc6dcfaf998d12a5a7abf9d2b97ffe2759fd47b7/hosts",
	        "LogPath": "/var/lib/docker/containers/340eb3625ebd62fb359cd33fcc6dcfaf998d12a5a7abf9d2b97ffe2759fd47b7/340eb3625ebd62fb359cd33fcc6dcfaf998d12a5a7abf9d2b97ffe2759fd47b7-json.log",
	        "Name": "/embed-certs-20220412200510-42006",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-20220412200510-42006:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-20220412200510-42006",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/dadeb2eddd4e44191a9cbc0ea441c3b044c125e01ecdef76eaf6f1e678a0465d-init/diff:/var/lib/docker/overlay2/a46d95d024de4bf9705eb193a92586bdab1878cd991975232b71b00099a9dcbd/diff:/var/lib/docker/overlay2/ea82ee4a684697cc3575193cd81b57372b927c9bf8e744fce634f9abd0ce56f9/diff:/var/lib/docker/overlay2/78746ad8dd0d6497f442bd186c99cfd280a7ed0ff07c9d33d217c0f00c8c4565/diff:/var/lib/docker/overlay2/a402f380eceb56655ea5f1e6ca4a61a01ae014a5df04f1a7d02d8f57ff3e6c84/diff:/var/lib/docker/overlay2/b27a231791a4d14a662f9e6e34fdd213411e56cc17149199657aa480018b3c72/diff:/var/lib/docker/overlay2/0a44e7fc2c8d5589d496b9d0585d39e8e142f48342ff9669a35c370bd0298e42/diff:/var/lib/docker/overlay2/6ca98e52ca7d4cc60d14bd2db9969dd3356e0e0ce3acd5bfb5734e6e59f52c7e/diff:/var/lib/docker/overlay2/9957a7c00c30c9d801326093ddf20994a7ee1daaa54bc4dac5c2dd6d8711bd7e/diff:/var/lib/docker/overlay2/f7a1aafecf6ee716c484b5eecbbf236a53607c253fe283c289707fad85495a88/diff:/var/lib/docker/overlay2/fe8cd1
26522650fedfc827751e0b74da9a882ff48de51bc9dee6428ee3bc1122/diff:/var/lib/docker/overlay2/5b4cc7e4a78288063ad39231ca158608aa28e9dec6015d4e186e4c4d6888017f/diff:/var/lib/docker/overlay2/2a754ceb6abee0f92c99667fae50c7899233e94595630e9caffbf73cda1ff741/diff:/var/lib/docker/overlay2/9e69139d9b2bc63ab678378e004018ece394ec37e8289ba5eb30901dda160da5/diff:/var/lib/docker/overlay2/3db8e6413b3a1f309b81d2e1a79c3d239c4e4568b31a6f4bf92511f477f3a61d/diff:/var/lib/docker/overlay2/5ab54e45d09e2d6da4f4228ebae3075b5974e1d847526c1011fc7368392ef0d2/diff:/var/lib/docker/overlay2/6daf6a3cf916347bbbb70ace4aab29dd0f272dc9e39d6b0bf14940470857f1d5/diff:/var/lib/docker/overlay2/b85d29df9ed74e769c82a956eb46ca4eaf51018e94270fee2f58a6f2d82c354c/diff:/var/lib/docker/overlay2/0804b9c30e0dcc68e15139106e47bca1969b010d520652c87ff1476f5da9b799/diff:/var/lib/docker/overlay2/2ef50ba91c77826aae2efca8daf7194c2d56fd8e745476a35413585cdab580a6/diff:/var/lib/docker/overlay2/6f5a272367c30d47254dedc8a42e6b2791c406c3b74fd6a8242d568e4ec362e3/diff:/var/lib/d
ocker/overlay2/e978bd5ca7463862ca1b51d0bf19f95d916464dc866f09f1ab4a5ae4c082c3a9/diff:/var/lib/docker/overlay2/0d60a5805e276ca3bff4824250eab1d2960e9d10d28282e07652204c07dc107f/diff:/var/lib/docker/overlay2/d00efa0bc999057fcf3efdeed81022cc8b9b9871919f11d7d9199a3d22fda41b/diff:/var/lib/docker/overlay2/44d3db5bf7925c4cc8ee60008ff23d799e12ea6586850d797b930fa796788861/diff:/var/lib/docker/overlay2/4af15c525b7ce96b7fd4117c156f53cf9099702641c2907909c12b7019563d44/diff:/var/lib/docker/overlay2/ae9ca4b8da4afb1303158a42ec2ac83dc057c0eaefcd69b7eeaa094ae24a39e7/diff:/var/lib/docker/overlay2/afb8ebd776ddcba17d1056f2350cd0b303c6664964644896a92e9c07252b5d95/diff:/var/lib/docker/overlay2/41b6235378ad54ccaec907f16811e7cd66bd777db63151293f4d8247a33af8f1/diff:/var/lib/docker/overlay2/e079465076581cb577a9d5c7d676cecb6495ddd73d9fc330e734203dd7e48607/diff:/var/lib/docker/overlay2/2d3a7c3e62a99d54d94c2562e13b904453442bda8208afe73cdbe1afdbdd0684/diff:/var/lib/docker/overlay2/b9e03b9cbc1c5a9bbdbb0c99ca5d7539c2fa81a37872c40e07377b52f19
50f4b/diff:/var/lib/docker/overlay2/fd0b72378869edec809e7ead1e4448ae67c73245e0e98d751c51253c80f12d56/diff:/var/lib/docker/overlay2/a34f5625ad35eb2eb1058204a5c23590d70d9aae62a3a0cf05f87501c388ccde/diff:/var/lib/docker/overlay2/6221ad5f4d7b133c35d96ab112cf2eb437196475a72ea0ec8952c058c6644381/diff:/var/lib/docker/overlay2/b33a322162ab62a47e5e731b35da4a989d8a79fcb67e1925b109eace6772370c/diff:/var/lib/docker/overlay2/b52fc81aca49f276f1c709fa139521063628f4042b9da5969a3487a57ee3226b/diff:/var/lib/docker/overlay2/5b4d11a181cad1ea657c7ea99d422b51c942ece21b8d24442b4e8806644e0e1c/diff:/var/lib/docker/overlay2/1620ce1d42f02f38d07f3ff0970e3df6940a3be20f3c7cd835f4f40f5cc2d010/diff:/var/lib/docker/overlay2/43f18c528700dc241024bb24f43a0d5192ecc9575f4b053582410f6265326434/diff:/var/lib/docker/overlay2/e59874999e485483e50da428a499e40c91890c33515857454d7a64bc04ca0c43/diff:/var/lib/docker/overlay2/a120ff1bbaa325cd87d2682d6751d3bf287b66d4bbe31bd1f9f6283d724491ac/diff:/var/lib/docker/overlay2/a6a6f3646fabc023283ff6349b9627be8332c4
bb740688f8fda12c98bd76b725/diff:/var/lib/docker/overlay2/3c2b110c4b3a8689b2792b2b73f99f06bd9858b494c2164e812208579b0223f2/diff:/var/lib/docker/overlay2/98e3881e2e4128283f8d66fafc082bc795e22eab77f135635d3249367b92ba5c/diff:/var/lib/docker/overlay2/ce937670cf64eff618c699bfd15e46c6d70c0184fef594182e5ec6df83b265bc/diff",
	                "MergedDir": "/var/lib/docker/overlay2/dadeb2eddd4e44191a9cbc0ea441c3b044c125e01ecdef76eaf6f1e678a0465d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/dadeb2eddd4e44191a9cbc0ea441c3b044c125e01ecdef76eaf6f1e678a0465d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/dadeb2eddd4e44191a9cbc0ea441c3b044c125e01ecdef76eaf6f1e678a0465d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-20220412200510-42006",
	                "Source": "/var/lib/docker/volumes/embed-certs-20220412200510-42006/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-20220412200510-42006",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-20220412200510-42006",
	                "name.minikube.sigs.k8s.io": "embed-certs-20220412200510-42006",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "63951c837bc4cbec77dc92e6cf6cbd1c5d6291277afb0821214e3e674d933846",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49432"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49431"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49428"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49430"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49429"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/63951c837bc4",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-20220412200510-42006": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "340eb3625ebd",
	                        "embed-certs-20220412200510-42006"
	                    ],
	                    "NetworkID": "4ace6a0fae231d855dc7c20348778126fda239556e97939a30b4df667ae930f8",
	                    "EndpointID": "d9bb1d4d461f8a5e6941f56ff72265d47d90204c1351eac2c95e6da67e66c2af",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
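Editor's note: the most useful part of the inspect output for this post-mortem is .NetworkSettings.Ports, which shows why the sshutil lines in the log dial 127.0.0.1:49432: that is the host port Docker bound to the container's 22/tcp. The cli_runner invocations above recover it with a Go template; a small standalone sketch of the same lookup (the hostPort helper is hypothetical, but the docker command and template are exactly the ones in the log):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// hostPort shells out to `docker container inspect` with the same Go
	// template used by the cli_runner log lines, returning the 127.0.0.1
	// host port bound to the given container port.
	func hostPort(container, containerPort string) (string, error) {
		tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports "%s") 0).HostPort}}`, containerPort)
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		// Against the inspect output above this prints 49432.
		port, err := hostPort("embed-certs-20220412200510-42006", "22/tcp")
		if err != nil {
			panic(err)
		}
		fmt.Println(port)
	}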
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20220412200510-42006 -n embed-certs-20220412200510-42006
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-20220412200510-42006 logs -n 25
helpers_test.go:252: TestStartStop/group/embed-certs/serial/SecondStart logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                            Args                            |                     Profile                     |  User   | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| stop    | -p                                                         | newest-cni-20220412201253-42006                 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:13:48 UTC | Tue, 12 Apr 2022 20:14:08 UTC |
	|         | newest-cni-20220412201253-42006                            |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | newest-cni-20220412201253-42006                 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:14:08 UTC | Tue, 12 Apr 2022 20:14:08 UTC |
	|         | newest-cni-20220412201253-42006                            |                                                 |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                               |                               |
	| start   | -p newest-cni-20220412201253-42006 --memory=2200           | newest-cni-20220412201253-42006                 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:14:08 UTC | Tue, 12 Apr 2022 20:14:42 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                 |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                 |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                 |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                 |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=containerd            |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.23.6-rc.0                          |                                                 |         |         |                               |                               |
	| ssh     | -p                                                         | newest-cni-20220412201253-42006                 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:14:43 UTC | Tue, 12 Apr 2022 20:14:43 UTC |
	|         | newest-cni-20220412201253-42006                            |                                                 |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                               |                               |
	| pause   | -p                                                         | newest-cni-20220412201253-42006                 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:14:43 UTC | Tue, 12 Apr 2022 20:14:44 UTC |
	|         | newest-cni-20220412201253-42006                            |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                               |                               |
	| unpause | -p                                                         | newest-cni-20220412201253-42006                 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:14:45 UTC | Tue, 12 Apr 2022 20:14:45 UTC |
	|         | newest-cni-20220412201253-42006                            |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                               |                               |
	| delete  | -p                                                         | newest-cni-20220412201253-42006                 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:14:46 UTC | Tue, 12 Apr 2022 20:14:49 UTC |
	|         | newest-cni-20220412201253-42006                            |                                                 |         |         |                               |                               |
	| delete  | -p                                                         | newest-cni-20220412201253-42006                 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:14:49 UTC | Tue, 12 Apr 2022 20:14:49 UTC |
	|         | newest-cni-20220412201253-42006                            |                                                 |         |         |                               |                               |
	| -p      | old-k8s-version-20220412200421-42006                       | old-k8s-version-20220412200421-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:17:18 UTC | Tue, 12 Apr 2022 20:17:19 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	| -p      | old-k8s-version-20220412200421-42006                       | old-k8s-version-20220412200421-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:17:20 UTC | Tue, 12 Apr 2022 20:17:21 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | old-k8s-version-20220412200421-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:17:22 UTC | Tue, 12 Apr 2022 20:17:22 UTC |
	|         | old-k8s-version-20220412200421-42006                       |                                                 |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                               |                               |
	| -p      | default-k8s-different-port-20220412201228-42006            | default-k8s-different-port-20220412201228-42006 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:17:23 UTC | Tue, 12 Apr 2022 20:17:24 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	| stop    | -p                                                         | old-k8s-version-20220412200421-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:17:23 UTC | Tue, 12 Apr 2022 20:17:28 UTC |
	|         | old-k8s-version-20220412200421-42006                       |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | old-k8s-version-20220412200421-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:17:29 UTC | Tue, 12 Apr 2022 20:17:29 UTC |
	|         | old-k8s-version-20220412200421-42006                       |                                                 |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                               |                               |
	| -p      | embed-certs-20220412200510-42006                           | embed-certs-20220412200510-42006                | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:18:10 UTC | Tue, 12 Apr 2022 20:18:11 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	| -p      | embed-certs-20220412200510-42006                           | embed-certs-20220412200510-42006                | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:18:13 UTC | Tue, 12 Apr 2022 20:18:13 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | embed-certs-20220412200510-42006                | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:18:14 UTC | Tue, 12 Apr 2022 20:18:14 UTC |
	|         | embed-certs-20220412200510-42006                           |                                                 |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                               |                               |
	| stop    | -p                                                         | embed-certs-20220412200510-42006                | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:18:15 UTC | Tue, 12 Apr 2022 20:18:25 UTC |
	|         | embed-certs-20220412200510-42006                           |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | embed-certs-20220412200510-42006                | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:18:25 UTC | Tue, 12 Apr 2022 20:18:25 UTC |
	|         | embed-certs-20220412200510-42006                           |                                                 |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                               |                               |
	| -p      | default-k8s-different-port-20220412201228-42006            | default-k8s-different-port-20220412201228-42006 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:25:26 UTC | Tue, 12 Apr 2022 20:25:27 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	| -p      | default-k8s-different-port-20220412201228-42006            | default-k8s-different-port-20220412201228-42006 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:25:28 UTC | Tue, 12 Apr 2022 20:25:29 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | default-k8s-different-port-20220412201228-42006 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:25:29 UTC | Tue, 12 Apr 2022 20:25:30 UTC |
	|         | default-k8s-different-port-20220412201228-42006            |                                                 |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                               |                               |
	| stop    | -p                                                         | default-k8s-different-port-20220412201228-42006 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:25:30 UTC | Tue, 12 Apr 2022 20:25:40 UTC |
	|         | default-k8s-different-port-20220412201228-42006            |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | default-k8s-different-port-20220412201228-42006 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:25:40 UTC | Tue, 12 Apr 2022 20:25:40 UTC |
	|         | default-k8s-different-port-20220412201228-42006            |                                                 |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                               |                               |
	| -p      | old-k8s-version-20220412200421-42006                       | old-k8s-version-20220412200421-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:27:23 UTC | Tue, 12 Apr 2022 20:27:24 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	|---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/04/12 20:25:40
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.18 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0412 20:25:40.977489  302775 out.go:297] Setting OutFile to fd 1 ...
	I0412 20:25:40.977641  302775 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0412 20:25:40.977651  302775 out.go:310] Setting ErrFile to fd 2...
	I0412 20:25:40.977656  302775 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0412 20:25:40.977775  302775 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/bin
	I0412 20:25:40.978024  302775 out.go:304] Setting JSON to false
	I0412 20:25:40.979319  302775 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":11294,"bootTime":1649783847,"procs":329,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1023-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0412 20:25:40.979397  302775 start.go:125] virtualization: kvm guest
	I0412 20:25:40.982252  302775 out.go:176] * [default-k8s-different-port-20220412201228-42006] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	I0412 20:25:40.984292  302775 out.go:176]   - MINIKUBE_LOCATION=13812
	I0412 20:25:40.982508  302775 notify.go:193] Checking for updates...
	I0412 20:25:40.986069  302775 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0412 20:25:40.987699  302775 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0412 20:25:40.989177  302775 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube
	I0412 20:25:40.990958  302775 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0412 20:25:40.991481  302775 config.go:178] Loaded profile config "default-k8s-different-port-20220412201228-42006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.5
	I0412 20:25:40.992603  302775 driver.go:346] Setting default libvirt URI to qemu:///system
	I0412 20:25:41.036514  302775 docker.go:137] docker version: linux-20.10.14
	I0412 20:25:41.036604  302775 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0412 20:25:41.138222  302775 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:75 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2022-04-12 20:25:41.069111625 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1023-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662799872 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0412 20:25:41.138342  302775 docker.go:254] overlay module found
	I0412 20:25:41.140887  302775 out.go:176] * Using the docker driver based on existing profile
	I0412 20:25:41.140919  302775 start.go:284] selected driver: docker
	I0412 20:25:41.140926  302775 start.go:801] validating driver "docker" against &{Name:default-k8s-different-port-20220412201228-42006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:default-k8s-different-port-20220412201228-42006 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.23.5 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0412 20:25:41.141041  302775 start.go:812] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	W0412 20:25:41.141086  302775 oci.go:120] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0412 20:25:41.141109  302775 out.go:241] ! Your cgroup does not allow setting memory.
	I0412 20:25:41.142724  302775 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0412 20:25:41.143315  302775 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0412 20:25:41.241191  302775 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:75 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2022-04-12 20:25:41.17623516 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1023-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662799872 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	W0412 20:25:41.241354  302775 oci.go:120] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0412 20:25:41.241406  302775 out.go:241] ! Your cgroup does not allow setting memory.
	I0412 20:25:41.243729  302775 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
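The two W-level warnings repeat minikube's oci.go preflight result: it could not confirm a usable memory cgroup controller on the host, so the requested --memory limit cannot be enforced on the kic container. A minimal manual check on the host, as a sketch assuming cgroup v1 (which this Ubuntu 20.04 agent uses):

	# Is the memory controller known to the kernel and enabled?
	grep memory /proc/cgroups
	# Is it actually mounted? (cgroup v1 layout)
	mount | grep cgroup | grep memory || echo "memory cgroup not mounted"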
	I0412 20:25:41.243836  302775 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0412 20:25:41.243861  302775 cni.go:93] Creating CNI manager for ""
	I0412 20:25:41.243872  302775 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0412 20:25:41.243889  302775 start_flags.go:306] config:
	{Name:default-k8s-different-port-20220412201228-42006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:default-k8s-different-port-20220412201228-42006 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.23.5 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0412 20:25:41.246889  302775 out.go:176] * Starting control plane node default-k8s-different-port-20220412201228-42006 in cluster default-k8s-different-port-20220412201228-42006
	I0412 20:25:41.246928  302775 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0412 20:25:41.248537  302775 out.go:176] * Pulling base image ...
	I0412 20:25:41.248572  302775 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime containerd
	I0412 20:25:41.248612  302775 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-containerd-overlay2-amd64.tar.lz4
	I0412 20:25:41.248642  302775 cache.go:57] Caching tarball of preloaded images
	I0412 20:25:41.248665  302775 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon
	I0412 20:25:41.248918  302775 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0412 20:25:41.248940  302775 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.5 on containerd
	I0412 20:25:41.249111  302775 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220412201228-42006/config.json ...
	I0412 20:25:41.295232  302775 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon, skipping pull
	I0412 20:25:41.295265  302775 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 exists in daemon, skipping load
	I0412 20:25:41.295288  302775 cache.go:206] Successfully downloaded all kic artifacts
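Every kic artifact is a cache hit here: the preload tarball exists on disk and the kicbase image digest is already in the local daemon, so nothing is downloaded. The image check can be reproduced by hand (a sketch):

	docker images --digests gcr.io/k8s-minikube/kicbase-builds \
	  | grep 90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5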
	I0412 20:25:41.295333  302775 start.go:352] acquiring machines lock for default-k8s-different-port-20220412201228-42006: {Name:mk673e2ef5ad74005354b6f8044ae48e370ea3c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0412 20:25:41.295441  302775 start.go:356] acquired machines lock for "default-k8s-different-port-20220412201228-42006" in 78.98µs
	I0412 20:25:41.295472  302775 start.go:94] Skipping create...Using existing machine configuration
	I0412 20:25:41.295481  302775 fix.go:55] fixHost starting: 
	I0412 20:25:41.295714  302775 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220412201228-42006 --format={{.State.Status}}
	I0412 20:25:41.330052  302775 fix.go:103] recreateIfNeeded on default-k8s-different-port-20220412201228-42006: state=Stopped err=<nil>
	W0412 20:25:41.330099  302775 fix.go:129] unexpected machine state, will restart: <nil>
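fix.go found the existing profile container in state=Stopped, so the flow below restarts it rather than recreating it. The same probe and restart can be issued manually with the docker CLI, mirroring the commands in this log (a sketch using this run's profile name):

	NAME=default-k8s-different-port-20220412201228-42006
	docker container inspect "$NAME" --format='{{.State.Status}}'   # raw state, e.g. "exited"
	docker start "$NAME"                                            # what minikube runs next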
	I0412 20:25:39.404942  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:25:41.405860  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:25:43.905123  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:25:41.529434  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:25:44.030080  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:25:41.332812  302775 out.go:176] * Restarting existing docker container for "default-k8s-different-port-20220412201228-42006" ...
	I0412 20:25:41.332900  302775 cli_runner.go:164] Run: docker start default-k8s-different-port-20220412201228-42006
	I0412 20:25:41.735198  302775 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220412201228-42006 --format={{.State.Status}}
	I0412 20:25:41.771480  302775 kic.go:416] container "default-k8s-different-port-20220412201228-42006" state is running.
	I0412 20:25:41.771899  302775 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220412201228-42006
	I0412 20:25:41.807070  302775 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220412201228-42006/config.json ...
	I0412 20:25:41.807321  302775 machine.go:88] provisioning docker machine ...
	I0412 20:25:41.807352  302775 ubuntu.go:169] provisioning hostname "default-k8s-different-port-20220412201228-42006"
	I0412 20:25:41.807404  302775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220412201228-42006
	I0412 20:25:41.843643  302775 main.go:134] libmachine: Using SSH client type: native
	I0412 20:25:41.843852  302775 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7d71c0] 0x7da220 <nil>  [] 0s} 127.0.0.1 49437 <nil> <nil>}
	I0412 20:25:41.843870  302775 main.go:134] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20220412201228-42006 && echo "default-k8s-different-port-20220412201228-42006" | sudo tee /etc/hostname
	I0412 20:25:41.844512  302775 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60986->127.0.0.1:49437: read: connection reset by peer
	I0412 20:25:44.977976  302775 main.go:134] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20220412201228-42006
	
	I0412 20:25:44.978060  302775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220412201228-42006
	I0412 20:25:45.012801  302775 main.go:134] libmachine: Using SSH client type: native
	I0412 20:25:45.012959  302775 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7d71c0] 0x7da220 <nil>  [] 0s} 127.0.0.1 49437 <nil> <nil>}
	I0412 20:25:45.012982  302775 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20220412201228-42006' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20220412201228-42006/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20220412201228-42006' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0412 20:25:45.132428  302775 main.go:134] libmachine: SSH cmd err, output: <nil>: 
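The script above makes the node's hostname resolve locally: it rewrites an existing 127.0.1.1 entry or appends one. The result can be verified over the forwarded SSH port from this run, a sketch using the port and key path shown elsewhere in this log (key path abbreviated to the .minikube tree):

	ssh -p 49437 -i .minikube/machines/default-k8s-different-port-20220412201228-42006/id_rsa \
	  docker@127.0.0.1 'grep 127.0.1.1 /etc/hosts'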
	I0412 20:25:45.132458  302775 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube}
	I0412 20:25:45.132515  302775 ubuntu.go:177] setting up certificates
	I0412 20:25:45.132527  302775 provision.go:83] configureAuth start
	I0412 20:25:45.132583  302775 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220412201228-42006
	I0412 20:25:45.167292  302775 provision.go:138] copyHostCerts
	I0412 20:25:45.167378  302775 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem, removing ...
	I0412 20:25:45.167393  302775 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem
	I0412 20:25:45.167463  302775 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem (1082 bytes)
	I0412 20:25:45.167565  302775 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem, removing ...
	I0412 20:25:45.167579  302775 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem
	I0412 20:25:45.167616  302775 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem (1123 bytes)
	I0412 20:25:45.167686  302775 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem, removing ...
	I0412 20:25:45.167698  302775 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem
	I0412 20:25:45.167731  302775 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem (1675 bytes)
	I0412 20:25:45.167790  302775 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20220412201228-42006 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-different-port-20220412201228-42006]
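The regenerated server certificate has to carry every name in the san=[...] list above, or TLS connections from the host to the machine will fail verification. The SANs of an existing server.pem can be listed with openssl (a sketch; path abbreviated to the .minikube tree):

	openssl x509 -in .minikube/machines/server.pem -noout -text \
	  | grep -A1 'Subject Alternative Name'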
	I0412 20:25:45.287902  302775 provision.go:172] copyRemoteCerts
	I0412 20:25:45.287991  302775 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0412 20:25:45.288040  302775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220412201228-42006
	I0412 20:25:45.322519  302775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220412201228-42006/id_rsa Username:docker}
	I0412 20:25:45.411995  302775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0412 20:25:45.430261  302775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem --> /etc/docker/server.pem (1310 bytes)
	I0412 20:25:45.448712  302775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0412 20:25:45.466551  302775 provision.go:86] duration metric: configureAuth took 334.00574ms
	I0412 20:25:45.466577  302775 ubuntu.go:193] setting minikube options for container-runtime
	I0412 20:25:45.466762  302775 config.go:178] Loaded profile config "default-k8s-different-port-20220412201228-42006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.5
	I0412 20:25:45.466775  302775 machine.go:91] provisioned docker machine in 3.659438406s
	I0412 20:25:45.466782  302775 start.go:306] post-start starting for "default-k8s-different-port-20220412201228-42006" (driver="docker")
	I0412 20:25:45.466788  302775 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0412 20:25:45.466829  302775 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0412 20:25:45.466867  302775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220412201228-42006
	I0412 20:25:45.501481  302775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220412201228-42006/id_rsa Username:docker}
	I0412 20:25:45.588112  302775 ssh_runner.go:195] Run: cat /etc/os-release
	I0412 20:25:45.591046  302775 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0412 20:25:45.591069  302775 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0412 20:25:45.591080  302775 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0412 20:25:45.591089  302775 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0412 20:25:45.591103  302775 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/addons for local assets ...
	I0412 20:25:45.591152  302775 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files for local assets ...
	I0412 20:25:45.591229  302775 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/420062.pem -> 420062.pem in /etc/ssl/certs
	I0412 20:25:45.591327  302775 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0412 20:25:45.598574  302775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/420062.pem --> /etc/ssl/certs/420062.pem (1708 bytes)
	I0412 20:25:45.617879  302775 start.go:309] post-start completed in 151.076407ms
	I0412 20:25:45.617968  302775 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0412 20:25:45.618023  302775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220412201228-42006
	I0412 20:25:45.652386  302775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220412201228-42006/id_rsa Username:docker}
	I0412 20:25:45.736884  302775 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0412 20:25:45.741043  302775 fix.go:57] fixHost completed within 4.445551228s
	I0412 20:25:45.741076  302775 start.go:81] releasing machines lock for "default-k8s-different-port-20220412201228-42006", held for 4.445612789s
	I0412 20:25:45.741159  302775 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220412201228-42006
	I0412 20:25:45.775496  302775 ssh_runner.go:195] Run: systemctl --version
	I0412 20:25:45.775542  302775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220412201228-42006
	I0412 20:25:45.775584  302775 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0412 20:25:45.775646  302775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220412201228-42006
	I0412 20:25:45.812306  302775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220412201228-42006/id_rsa Username:docker}
	I0412 20:25:45.812626  302775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220412201228-42006/id_rsa Username:docker}
	I0412 20:25:45.921246  302775 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0412 20:25:45.933022  302775 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0412 20:25:45.942974  302775 docker.go:183] disabling docker service ...
	I0412 20:25:45.943055  302775 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0412 20:25:45.953239  302775 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0412 20:25:45.962782  302775 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0412 20:25:46.404485  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:25:48.404784  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:25:46.529944  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:25:48.530319  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:25:46.046623  302775 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0412 20:25:46.129007  302775 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
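Because this profile runs containerd, dockerd inside the node is stopped, disabled and masked so the two runtimes cannot both claim containers. Collapsed into one place, the sequence above is equivalent to this sketch:

	sudo systemctl stop -f docker.socket docker.service
	sudo systemctl disable docker.socket
	sudo systemctl mask docker.service
	sudo systemctl is-active --quiet docker && echo "docker is unexpectedly still active"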
	I0412 20:25:46.138577  302775 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0412 20:25:46.152328  302775 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLm1vbml0b3IudjEuY2dyb3VwcyJdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNiIKICAgIHN0YXRzX2NvbGxlY3RfcGVyaW9kID0gMTAKICAgIGVuYWJsZV90bHNfc3RyZWFtaW5nID0gZmFsc2UKICAgIG1heF9jb250YWluZXJfbG9nX2xpbmVfc2l6ZSA9IDE2Mzg0CiAgICByZXN0cmljdF9vb21fc2NvcmVfYWRqID0gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZF0KICAgICAgZGlzY2FyZF91bnBhY2tlZF9sYXllcnMgPSB0cnVlCiAgICAgIHNuYXBzaG90dGVyID0gIm92ZXJsYXlmcyIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQuZGVmYXVsdF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnVudHJ1c3RlZF93b3JrbG9hZF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICIiCiAgICAgICAgcnVudGltZV9lbmdpbmUgPSAiIgogICAgICAgIHJ1bnRpbWVfcm9vdCA9ICIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzLnJ1bmNdCiAgICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICBTeXN0ZW1kQ2dyb3VwID0gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY25pXQogICAgICBiaW5fZGlyID0gIi9vcHQvY25pL2JpbiIKICAgICAgY29uZl9kaXIgPSAiL2V0Yy9jbmkvbmV0Lm1rIgogICAgICBjb25mX3RlbXBsYXRlID0gIiIKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5yZWdpc3RyeV0KICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5Lm1pcnJvcnNdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5Lm1pcnJvcnMuImRvY2tlci5pbyJdCiAgICAgICAgICBlbmRwb2ludCA9IFsiaHR0cHM6Ly9yZWdpc3RyeS0xLmRvY2tlci5pbyJdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuc2VydmljZS52MS5kaWZmLXNlcnZpY2UiXQogICAgZGVmYXVsdCA9IFsid2Fsa2luZyJdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ2MudjEuc2NoZWR1bGVyIl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
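The containerd configuration travels as a single base64 blob and is decoded into /etc/containerd/config.toml on the node. Decoding it the same way exposes the settings this run depends on, such as the CNI conf dir, the pause image and the cgroup driver (a sketch; $B64 stands for the payload above):

	echo "$B64" | base64 -d | grep -E 'conf_dir|sandbox_image|SystemdCgroup'
	# conf_dir = "/etc/cni/net.mk"
	# sandbox_image = "k8s.gcr.io/pause:3.6"
	# SystemdCgroup = false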
	I0412 20:25:46.166473  302775 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0412 20:25:46.173272  302775 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0412 20:25:46.180113  302775 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0412 20:25:46.251894  302775 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0412 20:25:46.327719  302775 start.go:441] Will wait 60s for socket path /run/containerd/containerd.sock
	I0412 20:25:46.327799  302775 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0412 20:25:46.331793  302775 start.go:462] Will wait 60s for crictl version
	I0412 20:25:46.331863  302775 ssh_runner.go:195] Run: sudo crictl version
	I0412 20:25:46.357306  302775 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-04-12T20:25:46Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
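containerd was restarted a moment earlier, so the first crictl probe races its CRI plugin coming up; the retry loop above absorbs that. The same probe can be run by hand against the socket configured in /etc/crictl.yaml (a sketch):

	sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock version
	# succeeds once the server reports RuntimeName/RuntimeVersion instead of
	# "server is not initialized yet"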
	I0412 20:25:50.405078  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:25:52.905509  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:25:51.029894  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:25:53.030953  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:25:55.529321  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:25:57.404189  302775 ssh_runner.go:195] Run: sudo crictl version
	I0412 20:25:57.428756  302775 start.go:471] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.5.10
	RuntimeApiVersion:  v1alpha2
	I0412 20:25:57.428821  302775 ssh_runner.go:195] Run: containerd --version
	I0412 20:25:57.451527  302775 ssh_runner.go:195] Run: containerd --version
	I0412 20:25:57.476141  302775 out.go:176] * Preparing Kubernetes v1.23.5 on containerd 1.5.10 ...
	I0412 20:25:57.476238  302775 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220412201228-42006 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0412 20:25:57.510584  302775 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0412 20:25:57.514080  302775 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0412 20:25:55.405528  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:25:57.904637  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:25:57.529524  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:25:59.529890  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:25:57.525999  302775 out.go:176]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0412 20:25:57.526084  302775 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime containerd
	I0412 20:25:57.526141  302775 ssh_runner.go:195] Run: sudo crictl images --output json
	I0412 20:25:57.550533  302775 containerd.go:607] all images are preloaded for containerd runtime.
	I0412 20:25:57.550557  302775 containerd.go:521] Images already preloaded, skipping extraction
	I0412 20:25:57.550612  302775 ssh_runner.go:195] Run: sudo crictl images --output json
	I0412 20:25:57.574550  302775 containerd.go:607] all images are preloaded for containerd runtime.
	I0412 20:25:57.574580  302775 cache_images.go:84] Images are preloaded, skipping loading
	I0412 20:25:57.574639  302775 ssh_runner.go:195] Run: sudo crictl info
	I0412 20:25:57.599639  302775 cni.go:93] Creating CNI manager for ""
	I0412 20:25:57.599668  302775 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0412 20:25:57.599690  302775 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0412 20:25:57.599711  302775 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8444 KubernetesVersion:v1.23.5 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20220412201228-42006 NodeName:default-k8s-different-port-20220412201228-42006 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0412 20:25:57.599848  302775 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "default-k8s-different-port-20220412201228-42006"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.5
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
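The rendered kubeadm config above is written out as /var/tmp/minikube/kubeadm.yaml.new a few steps below. For manual debugging it can be exercised without touching node state via kubeadm's dry-run mode, a sketch that assumes the file has been promoted to kubeadm.yaml as minikube normally does:

	sudo /var/lib/minikube/binaries/v1.23.5/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml --dry-run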
	
	I0412 20:25:57.599941  302775 kubeadm.go:936] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.5/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=default-k8s-different-port-20220412201228-42006 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.5 ClusterName:default-k8s-different-port-20220412201228-42006 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
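The drop-in above clears ExecStart and re-sets it so kubelet uses the remote CRI runtime on containerd's socket plus the non-default --cni-conf-dir=/etc/cni/net.mk. After the scp steps below land it at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, the merged unit can be reviewed with (a sketch):

	systemctl cat kubelet            # base unit plus the 10-kubeadm.conf drop-in
	sudo systemctl daemon-reload     # needed after editing drop-ins by hand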
	I0412 20:25:57.600004  302775 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.5
	I0412 20:25:57.607520  302775 binaries.go:44] Found k8s binaries, skipping transfer
	I0412 20:25:57.607582  302775 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0412 20:25:57.614505  302775 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (592 bytes)
	I0412 20:25:57.627492  302775 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0412 20:25:57.640002  302775 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2076 bytes)
	I0412 20:25:57.652626  302775 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0412 20:25:57.655502  302775 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0412 20:25:57.664909  302775 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220412201228-42006 for IP: 192.168.49.2
	I0412 20:25:57.665006  302775 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.key
	I0412 20:25:57.665052  302775 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.key
	I0412 20:25:57.665122  302775 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220412201228-42006/client.key
	I0412 20:25:57.665173  302775 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220412201228-42006/apiserver.key.dd3b5fb2
	I0412 20:25:57.665208  302775 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220412201228-42006/proxy-client.key
	I0412 20:25:57.665293  302775 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/42006.pem (1338 bytes)
	W0412 20:25:57.665321  302775 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/42006_empty.pem, impossibly tiny 0 bytes
	I0412 20:25:57.665332  302775 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem (1679 bytes)
	I0412 20:25:57.665358  302775 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem (1082 bytes)
	I0412 20:25:57.665384  302775 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem (1123 bytes)
	I0412 20:25:57.665409  302775 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem (1675 bytes)
	I0412 20:25:57.665455  302775 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/420062.pem (1708 bytes)
	I0412 20:25:57.666053  302775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220412201228-42006/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0412 20:25:57.683954  302775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220412201228-42006/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0412 20:25:57.701541  302775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220412201228-42006/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0412 20:25:57.719461  302775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220412201228-42006/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0412 20:25:57.737734  302775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0412 20:25:57.756457  302775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0412 20:25:57.774968  302775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0412 20:25:57.793059  302775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0412 20:25:57.810982  302775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0412 20:25:57.829015  302775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/42006.pem --> /usr/share/ca-certificates/42006.pem (1338 bytes)
	I0412 20:25:57.847312  302775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/420062.pem --> /usr/share/ca-certificates/420062.pem (1708 bytes)
	I0412 20:25:57.864991  302775 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0412 20:25:57.878055  302775 ssh_runner.go:195] Run: openssl version
	I0412 20:25:57.883971  302775 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/420062.pem && ln -fs /usr/share/ca-certificates/420062.pem /etc/ssl/certs/420062.pem"
	I0412 20:25:57.892175  302775 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/420062.pem
	I0412 20:25:57.895736  302775 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Apr 12 19:26 /usr/share/ca-certificates/420062.pem
	I0412 20:25:57.895785  302775 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/420062.pem
	I0412 20:25:57.900802  302775 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/420062.pem /etc/ssl/certs/3ec20f2e.0"
	I0412 20:25:57.908397  302775 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0412 20:25:57.916262  302775 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0412 20:25:57.919469  302775 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Apr 12 19:21 /usr/share/ca-certificates/minikubeCA.pem
	I0412 20:25:57.919524  302775 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0412 20:25:57.924891  302775 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0412 20:25:57.932113  302775 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/42006.pem && ln -fs /usr/share/ca-certificates/42006.pem /etc/ssl/certs/42006.pem"
	I0412 20:25:57.940241  302775 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42006.pem
	I0412 20:25:57.943396  302775 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Apr 12 19:26 /usr/share/ca-certificates/42006.pem
	I0412 20:25:57.943447  302775 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42006.pem
	I0412 20:25:57.948339  302775 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/42006.pem /etc/ssl/certs/51391683.0"
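The symlink names in the runs above (3ec20f2e.0, b5213941.0, 51391683.0) are OpenSSL subject-hash values: openssl x509 -hash -noout prints the hash, and the PEM is linked as /etc/ssl/certs/<hash>.0 so OpenSSL-based clients can locate the CA. A hedged sketch of that sequence (linkCertByHash is illustrative; minikube runs the equivalent shell commands over SSH, as logged):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkCertByHash computes a certificate's subject hash with openssl and
// symlinks the PEM to /etc/ssl/certs/<hash>.0.
func linkCertByHash(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := "/etc/ssl/certs/" + hash + ".0"
	_ = os.Remove(link) // os.Symlink fails if a stale link exists
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}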
	I0412 20:25:57.955118  302775 kubeadm.go:391] StartCluster: {Name:default-k8s-different-port-20220412201228-42006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:default-k8s-different-port-20220412201228-42006 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.23.5 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0412 20:25:57.955221  302775 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0412 20:25:57.955270  302775 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0412 20:25:57.980566  302775 cri.go:87] found id: "9833ae46466ccbb9055512e120cc659bee6ff8ac05bf843caeb9217333fd6b63"
	I0412 20:25:57.980602  302775 cri.go:87] found id: "e86db06fb9ce1685b312bc36622f28895b85dab6e39ee399901dce4efc6da848"
	I0412 20:25:57.980613  302775 cri.go:87] found id: "51def5f5fb57c8ab61a9c585b1fe038e725e93a3a81684c7e48cceffbcd0e646"
	I0412 20:25:57.980624  302775 cri.go:87] found id: "3c8657a1a5932876c532e5632e32b1b7bd034c015a4b5519a1ff53cf749d1ffd"
	I0412 20:25:57.980634  302775 cri.go:87] found id: "1032ec9dc604b2d805be253a0f7df89424fc5ef71ef86566ee57cd79cf66939c"
	I0412 20:25:57.980651  302775 cri.go:87] found id: "71af7fb31571e3cef12dcdba3ab49897e95bdbe6c1d9d6d5bbb1c36c97242cda"
	I0412 20:25:57.980666  302775 cri.go:87] found id: ""
	I0412 20:25:57.980719  302775 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0412 20:25:57.995137  302775 cri.go:114] JSON = null
	W0412 20:25:57.995186  302775 kubeadm.go:398] unpause failed: list paused: list returned 0 containers, but ps returned 6
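The warning above comes from a consistency check: crictl reported six kube-system containers, but runc list -f json returned null, so no containers are actually in the paused state and the unpause pass is abandoned in favor of a restart. A minimal sketch of that cross-check (checkPaused is a hypothetical stand-in; the container IDs below are taken from the log):

package main

import "fmt"

// checkPaused compares what crictl ps reported (psIDs) against what
// `runc list` reported as paused (pausedIDs). If runc sees none but
// crictl sees some, an unpause cannot proceed.
func checkPaused(psIDs, pausedIDs []string) error {
	if len(pausedIDs) == 0 && len(psIDs) > 0 {
		return fmt.Errorf("list paused: list returned 0 containers, but ps returned %d", len(psIDs))
	}
	return nil
}

func main() {
	ps := []string{"9833ae46466c", "e86db06fb9ce", "51def5f5fb57", "3c8657a1a593", "1032ec9dc604", "71af7fb31571"}
	if err := checkPaused(ps, nil); err != nil {
		fmt.Println("unpause failed:", err) // matches the W-level line above
	}
}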
	I0412 20:25:57.995232  302775 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0412 20:25:58.002528  302775 kubeadm.go:402] found existing configuration files, will attempt cluster restart
	I0412 20:25:58.002554  302775 kubeadm.go:601] restartCluster start
	I0412 20:25:58.002599  302775 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0412 20:25:58.009347  302775 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:25:58.010180  302775 kubeconfig.go:116] verify returned: extract IP: "default-k8s-different-port-20220412201228-42006" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0412 20:25:58.010679  302775 kubeconfig.go:127] "default-k8s-different-port-20220412201228-42006" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig - will repair!
	I0412 20:25:58.011431  302775 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig: {Name:mk47182dadcc139652898f38b199a7292a3b4031 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 20:25:58.013184  302775 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0412 20:25:58.020529  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:25:58.020588  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:25:58.029161  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:25:58.229565  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:25:58.229683  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:25:58.238841  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:25:58.430075  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:25:58.430153  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:25:58.439240  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:25:58.629511  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:25:58.629591  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:25:58.638727  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:25:58.829920  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:25:58.830002  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:25:58.839034  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:25:59.030207  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:25:59.030273  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:25:59.038870  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:25:59.230141  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:25:59.230228  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:25:59.239506  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:25:59.429823  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:25:59.429895  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:25:59.438940  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:25:59.630148  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:25:59.630223  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:25:59.639014  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:25:59.830279  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:25:59.830365  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:25:59.839400  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:26:00.029480  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:26:00.029578  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:26:00.039506  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:26:00.229819  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:26:00.229932  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:26:00.238666  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:26:00.429971  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:26:00.430041  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:26:00.439152  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:26:00.629391  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:26:00.629472  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:26:00.638771  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:26:00.830087  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:26:00.830179  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:26:00.839152  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:25:59.905306  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:01.905660  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:02.030088  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:04.030403  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:01.029653  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:26:01.029717  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:26:01.038688  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:26:01.038731  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:26:01.038777  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:26:01.047040  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:26:01.047087  302775 kubeadm.go:576] needs reconfigure: apiserver error: timed out waiting for the condition
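The run of "Checking apiserver status ..." entries above is a fixed-interval poll: every ~200ms, pgrep looks for a kube-apiserver process, and once the deadline passes without a match, restartCluster concludes a reconfigure is needed. A minimal local sketch of such a poll (apiserverPID is illustrative and runs pgrep locally; the real check runs the same command over SSH inside the node container):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverPID polls pgrep until a matching PID appears or the deadline
// passes. pgrep exits non-zero when nothing matches, which surfaces here
// as err != nil and triggers another retry.
func apiserverPID(interval, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for {
		out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			return string(out), nil
		}
		if time.Now().After(deadline) {
			return "", fmt.Errorf("timed out waiting for the condition")
		}
		time.Sleep(interval)
	}
}

func main() {
	pid, err := apiserverPID(200*time.Millisecond, 3*time.Second)
	if err != nil {
		fmt.Println("needs reconfigure: apiserver error:", err)
		return
	}
	fmt.Println("apiserver pid:", pid)
}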
	I0412 20:26:01.047098  302775 kubeadm.go:1067] stopping kube-system containers ...
	I0412 20:26:01.047119  302775 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0412 20:26:01.047173  302775 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0412 20:26:01.074252  302775 cri.go:87] found id: "9833ae46466ccbb9055512e120cc659bee6ff8ac05bf843caeb9217333fd6b63"
	I0412 20:26:01.074279  302775 cri.go:87] found id: "e86db06fb9ce1685b312bc36622f28895b85dab6e39ee399901dce4efc6da848"
	I0412 20:26:01.074289  302775 cri.go:87] found id: "51def5f5fb57c8ab61a9c585b1fe038e725e93a3a81684c7e48cceffbcd0e646"
	I0412 20:26:01.074295  302775 cri.go:87] found id: "3c8657a1a5932876c532e5632e32b1b7bd034c015a4b5519a1ff53cf749d1ffd"
	I0412 20:26:01.074302  302775 cri.go:87] found id: "1032ec9dc604b2d805be253a0f7df89424fc5ef71ef86566ee57cd79cf66939c"
	I0412 20:26:01.074309  302775 cri.go:87] found id: "71af7fb31571e3cef12dcdba3ab49897e95bdbe6c1d9d6d5bbb1c36c97242cda"
	I0412 20:26:01.074316  302775 cri.go:87] found id: ""
	I0412 20:26:01.074322  302775 cri.go:232] Stopping containers: [9833ae46466ccbb9055512e120cc659bee6ff8ac05bf843caeb9217333fd6b63 e86db06fb9ce1685b312bc36622f28895b85dab6e39ee399901dce4efc6da848 51def5f5fb57c8ab61a9c585b1fe038e725e93a3a81684c7e48cceffbcd0e646 3c8657a1a5932876c532e5632e32b1b7bd034c015a4b5519a1ff53cf749d1ffd 1032ec9dc604b2d805be253a0f7df89424fc5ef71ef86566ee57cd79cf66939c 71af7fb31571e3cef12dcdba3ab49897e95bdbe6c1d9d6d5bbb1c36c97242cda]
	I0412 20:26:01.074376  302775 ssh_runner.go:195] Run: which crictl
	I0412 20:26:01.077493  302775 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop 9833ae46466ccbb9055512e120cc659bee6ff8ac05bf843caeb9217333fd6b63 e86db06fb9ce1685b312bc36622f28895b85dab6e39ee399901dce4efc6da848 51def5f5fb57c8ab61a9c585b1fe038e725e93a3a81684c7e48cceffbcd0e646 3c8657a1a5932876c532e5632e32b1b7bd034c015a4b5519a1ff53cf749d1ffd 1032ec9dc604b2d805be253a0f7df89424fc5ef71ef86566ee57cd79cf66939c 71af7fb31571e3cef12dcdba3ab49897e95bdbe6c1d9d6d5bbb1c36c97242cda
	I0412 20:26:01.103072  302775 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0412 20:26:01.114425  302775 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0412 20:26:01.122172  302775 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Apr 12 20:12 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Apr 12 20:12 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2127 Apr 12 20:13 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5592 Apr 12 20:12 /etc/kubernetes/scheduler.conf
	
	I0412 20:26:01.122241  302775 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0412 20:26:01.129554  302775 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0412 20:26:01.136877  302775 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0412 20:26:01.143698  302775 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:26:01.143755  302775 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0412 20:26:01.150238  302775 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0412 20:26:01.157232  302775 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:26:01.157288  302775 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0412 20:26:01.164343  302775 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0412 20:26:01.171782  302775 kubeadm.go:678] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
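The grep-then-rm sequence above checks each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and deletes the ones that no longer match (here controller-manager.conf and scheduler.conf), so kubeadm regenerates them against port 8444. A sketch of that pruning step (pruneStaleConf is a hypothetical helper; paths and endpoint are copied from the log):

package main

import (
	"fmt"
	"os"
	"strings"
)

// pruneStaleConf removes a kubeconfig that no longer points at the
// expected control-plane endpoint, so it can be regenerated.
func pruneStaleConf(path, endpoint string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	if strings.Contains(string(data), endpoint) {
		return nil // still valid, keep it
	}
	fmt.Printf("%q may not be in %s - will remove\n", endpoint, path)
	return os.Remove(path)
}

func main() {
	for _, p := range []string{"/etc/kubernetes/controller-manager.conf", "/etc/kubernetes/scheduler.conf"} {
		if err := pruneStaleConf(p, "https://control-plane.minikube.internal:8444"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
}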
	I0412 20:26:01.171805  302775 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0412 20:26:01.218060  302775 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0412 20:26:01.745379  302775 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0412 20:26:01.885213  302775 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0412 20:26:01.938174  302775 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0412 20:26:02.011809  302775 api_server.go:51] waiting for apiserver process to appear ...
	I0412 20:26:02.011879  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:26:02.521271  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:26:03.021279  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:26:03.521794  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:26:04.021460  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:26:04.521473  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:26:05.021310  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:26:05.521258  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:26:04.405325  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:06.905312  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:06.529561  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:08.530280  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:06.022069  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:26:06.522094  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:26:07.022120  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:26:07.521096  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:26:08.021120  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:26:08.091617  302775 api_server.go:71] duration metric: took 6.079806462s to wait for apiserver process to appear ...
	I0412 20:26:08.091701  302775 api_server.go:87] waiting for apiserver healthz status ...
	I0412 20:26:08.091726  302775 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0412 20:26:08.092170  302775 api_server.go:256] stopped: https://192.168.49.2:8444/healthz: Get "https://192.168.49.2:8444/healthz": dial tcp 192.168.49.2:8444: connect: connection refused
	I0412 20:26:08.592673  302775 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0412 20:26:11.086493  302775 api_server.go:266] https://192.168.49.2:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0412 20:26:11.086525  302775 api_server.go:102] status: https://192.168.49.2:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0412 20:26:11.092362  302775 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0412 20:26:11.097010  302775 api_server.go:266] https://192.168.49.2:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0412 20:26:11.097085  302775 api_server.go:102] status: https://192.168.49.2:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0412 20:26:11.592382  302775 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0412 20:26:11.597320  302775 api_server.go:266] https://192.168.49.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0412 20:26:11.597353  302775 api_server.go:102] status: https://192.168.49.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0412 20:26:12.092945  302775 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0412 20:26:12.097452  302775 api_server.go:266] https://192.168.49.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0412 20:26:12.097482  302775 api_server.go:102] status: https://192.168.49.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0412 20:26:12.593112  302775 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0412 20:26:12.598178  302775 api_server.go:266] https://192.168.49.2:8444/healthz returned 200:
	ok
	I0412 20:26:12.604429  302775 api_server.go:140] control plane version: v1.23.5
	I0412 20:26:12.604455  302775 api_server.go:130] duration metric: took 4.512735667s to wait for apiserver health ...
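The healthz wait above shows the usual startup progression: connection refused while the apiserver binds, 403 while anonymous access is rejected before RBAC bootstrap completes, 500 while poststarthooks are still failing, and finally 200 "ok". A minimal sketch of a poll that treats everything short of 200 as retryable (waitHealthz is illustrative; it skips TLS verification instead of loading the cluster CA as the real client would):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthz polls /healthz until it returns 200, logging non-200
// responses the way the api_server.go lines above do.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("%s returned %d:\n%s\n", url, resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence above
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	if err := waitHealthz("https://192.168.49.2:8444/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}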
	I0412 20:26:12.604466  302775 cni.go:93] Creating CNI manager for ""
	I0412 20:26:12.604475  302775 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0412 20:26:09.405613  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:11.905154  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:11.029929  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:13.030209  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:15.530013  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:12.607164  302775 out.go:176] * Configuring CNI (Container Networking Interface) ...
	I0412 20:26:12.607235  302775 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0412 20:26:12.610895  302775 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.5/kubectl ...
	I0412 20:26:12.610917  302775 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0412 20:26:12.624805  302775 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0412 20:26:13.514228  302775 system_pods.go:43] waiting for kube-system pods to appear ...
	I0412 20:26:13.521326  302775 system_pods.go:59] 9 kube-system pods found
	I0412 20:26:13.521387  302775 system_pods.go:61] "coredns-64897985d-c2gzm" [17d60869-0f98-4975-877a-d2ac69c4c6c2] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0412 20:26:13.521400  302775 system_pods.go:61] "etcd-default-k8s-different-port-20220412201228-42006" [90ac8791-2f40-445e-a751-748814d43a72] Running
	I0412 20:26:13.521415  302775 system_pods.go:61] "kindnet-852v4" [d4596d79-4aba-4c96-9fd5-c2c2b2010810] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0412 20:26:13.521437  302775 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20220412201228-42006" [a3eb3b43-f13c-4205-9caf-0b3914050d7c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0412 20:26:13.521450  302775 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20220412201228-42006" [fca7914c-0a48-40de-af60-44c695d023c7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0412 20:26:13.521456  302775 system_pods.go:61] "kube-proxy-nfsgp" [fb26fa90-e38d-4c50-bbdc-aa46859bef70] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0412 20:26:13.521466  302775 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20220412201228-42006" [9fbd69c6-cf7b-4801-b028-f7729f80bf64] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0412 20:26:13.521475  302775 system_pods.go:61] "metrics-server-b955d9d8-8z9c9" [e954cf67-0a7d-42ed-b754-921b79512531] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0412 20:26:13.521484  302775 system_pods.go:61] "storage-provisioner" [c1d494a3-740b-43f4-bd16-12e781074fdd] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0412 20:26:13.521493  302775 system_pods.go:74] duration metric: took 7.243145ms to wait for pod list to return data ...
	I0412 20:26:13.521504  302775 node_conditions.go:102] verifying NodePressure condition ...
	I0412 20:26:13.524664  302775 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0412 20:26:13.524723  302775 node_conditions.go:123] node cpu capacity is 8
	I0412 20:26:13.524744  302775 node_conditions.go:105] duration metric: took 3.23136ms to run NodePressure ...
	I0412 20:26:13.524771  302775 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0412 20:26:13.661578  302775 kubeadm.go:737] waiting for restarted kubelet to initialise ...
	I0412 20:26:13.665722  302775 kubeadm.go:752] kubelet initialised
	I0412 20:26:13.665746  302775 kubeadm.go:753] duration metric: took 4.136738ms waiting for restarted kubelet to initialise ...
	I0412 20:26:13.665755  302775 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0412 20:26:13.670837  302775 pod_ready.go:78] waiting up to 4m0s for pod "coredns-64897985d-c2gzm" in "kube-system" namespace to be "Ready" ...
	I0412 20:26:15.676828  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
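The pod_ready dumps above serialize the full PodStatus each time the wait re-checks; the coredns pod stays Pending with PodScheduled=False because the node still carries the not-ready taint, so the loop keeps polling. A tiny sketch of the readiness predicate behind such a wait (podCondition and isReady are illustrative stand-ins, not minikube's types):

package main

import "fmt"

// podCondition mirrors the fields the status dumps above expose: a
// condition type and its True/False status.
type podCondition struct {
	Type   string
	Status string
}

// isReady reports a pod as "Ready" only when it is Running and its Ready
// condition is True; a Pending pod with PodScheduled=False fails both.
func isReady(phase string, conds []podCondition) bool {
	if phase != "Running" {
		return false
	}
	for _, c := range conds {
		if c.Type == "Ready" {
			return c.Status == "True"
		}
	}
	return false
}

func main() {
	conds := []podCondition{{Type: "PodScheduled", Status: "False"}}
	fmt.Println(isReady("Pending", conds)) // false, so the wait keeps polling
}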
	I0412 20:26:14.405001  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:16.405140  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:18.405282  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:18.029626  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:20.029796  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:18.177431  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:20.676699  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:20.904768  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:22.905306  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:22.530289  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:25.030441  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:22.676917  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:25.177312  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:25.405505  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:27.405547  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:27.529706  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:29.529954  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:27.677396  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:30.176836  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:29.904767  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:31.905389  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:32.029879  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:34.030539  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:32.177928  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:34.676583  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:34.405637  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:36.904807  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:36.030819  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:38.529411  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:40.529737  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:36.676861  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:38.676927  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:39.404491  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:41.404659  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:43.905243  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:43.029801  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:45.030177  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:41.177333  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:43.177431  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:45.177567  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:46.404939  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:48.405023  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:47.529990  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:50.029848  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:47.676992  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:50.177314  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:50.904925  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:52.905456  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:52.529958  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:54.530211  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:52.677354  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:55.177581  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:55.404968  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:57.904806  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:57.029172  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:59.029355  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:57.177797  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:59.676784  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:59.905303  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:27:02.404803  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:27:01.030119  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:27:03.529481  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:27:02.176739  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:04.677083  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:04.904522  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:27:06.905502  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:27:06.030007  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:27:08.529404  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:27:07.177282  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:09.677448  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:09.405228  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:27:11.905282  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:27:11.029791  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:27:13.030282  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:27:15.529429  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:27:12.176384  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:14.177069  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:14.404646  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:27:16.405558  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:27:18.905261  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:27:17.530006  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:27:20.030016  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:27:16.177280  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:18.677413  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:21.405385  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:27:22.907629  289404 node_ready.go:38] duration metric: took 4m0.012711851s waiting for node "old-k8s-version-20220412200421-42006" to be "Ready" ...
	I0412 20:27:22.910753  289404 out.go:176] 
	W0412 20:27:22.910934  289404 out.go:241] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0412 20:27:22.910950  289404 out.go:241] * 
	W0412 20:27:22.911829  289404 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0412 20:27:22.030056  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:27:24.529656  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:27:21.176971  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:23.676778  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:25.677210  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:27.029850  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:27:27.532457  293188 node_ready.go:38] duration metric: took 4m0.016261704s waiting for node "embed-certs-20220412200510-42006" to be "Ready" ...
	I0412 20:27:27.535074  293188 out.go:176] 
	W0412 20:27:27.535184  293188 out.go:241] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0412 20:27:27.535195  293188 out.go:241] * 
	W0412 20:27:27.535868  293188 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	b81c34430cb1e       6de166512aa22       About a minute ago   Running             kindnet-cni               1                   87d378e0d4e49
	35dd0377876c5       6de166512aa22       4 minutes ago        Exited              kindnet-cni               0                   87d378e0d4e49
	85e45673df218       3c53fa8541f95       4 minutes ago        Running             kube-proxy                0                   a78b57801a708
	93ef4fab7f5ad       884d49d6d8c9f       4 minutes ago        Running             kube-scheduler            2                   5bc8e7efde0b6
	a6631d59aa0ff       3fc1d62d65872       4 minutes ago        Running             kube-apiserver            2                   501d4f4e3dfa1
	faccb325c093f       b0c9e5e4dbb14       4 minutes ago        Running             kube-controller-manager   2                   b74d72be2b4d2
	d8ee5605c1944       25f8c7f3da61c       4 minutes ago        Running             etcd                      2                   7f38bf6138d38
	
	* 
	* ==> containerd <==
	* -- Logs begin at Tue 2022-04-12 20:18:26 UTC, end at Tue 2022-04-12 20:27:28 UTC. --
	Apr 12 20:23:27 embed-certs-20220412200510-42006 containerd[345]: time="2022-04-12T20:23:27.079438949Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:kube-proxy-zbssv,Uid:3dc573e8-739b-47f4-8fe9-dc637330aa09,Namespace:kube-system,Attempt:0,}"
	Apr 12 20:23:27 embed-certs-20220412200510-42006 containerd[345]: time="2022-04-12T20:23:27.095976970Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/87d378e0d4e49f2737411e5f59f4d3e7d7b3dd770002c06a77f266aa1546d873 pid=3330
	Apr 12 20:23:27 embed-certs-20220412200510-42006 containerd[345]: time="2022-04-12T20:23:27.097623035Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a78b57801a708eed781061b6e2c65d6a730a7892e0e68e7a70a3d5f1bc205ee5 pid=3340
	Apr 12 20:23:27 embed-certs-20220412200510-42006 containerd[345]: time="2022-04-12T20:23:27.158255709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zbssv,Uid:3dc573e8-739b-47f4-8fe9-dc637330aa09,Namespace:kube-system,Attempt:0,} returns sandbox id \"a78b57801a708eed781061b6e2c65d6a730a7892e0e68e7a70a3d5f1bc205ee5\""
	Apr 12 20:23:27 embed-certs-20220412200510-42006 containerd[345]: time="2022-04-12T20:23:27.161873848Z" level=info msg="CreateContainer within sandbox \"a78b57801a708eed781061b6e2c65d6a730a7892e0e68e7a70a3d5f1bc205ee5\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
	Apr 12 20:23:27 embed-certs-20220412200510-42006 containerd[345]: time="2022-04-12T20:23:27.176216543Z" level=info msg="CreateContainer within sandbox \"a78b57801a708eed781061b6e2c65d6a730a7892e0e68e7a70a3d5f1bc205ee5\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"85e45673df2189633dbfdab223666611b928677dbfa8af98b4a47fddf703bf69\""
	Apr 12 20:23:27 embed-certs-20220412200510-42006 containerd[345]: time="2022-04-12T20:23:27.176895712Z" level=info msg="StartContainer for \"85e45673df2189633dbfdab223666611b928677dbfa8af98b4a47fddf703bf69\""
	Apr 12 20:23:27 embed-certs-20220412200510-42006 containerd[345]: time="2022-04-12T20:23:27.255232793Z" level=info msg="StartContainer for \"85e45673df2189633dbfdab223666611b928677dbfa8af98b4a47fddf703bf69\" returns successfully"
	Apr 12 20:23:27 embed-certs-20220412200510-42006 containerd[345]: time="2022-04-12T20:23:27.381349214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kindnet-n99zz,Uid:a4a4a20b-4580-4435-bb88-e5f800055b3c,Namespace:kube-system,Attempt:0,} returns sandbox id \"87d378e0d4e49f2737411e5f59f4d3e7d7b3dd770002c06a77f266aa1546d873\""
	Apr 12 20:23:27 embed-certs-20220412200510-42006 containerd[345]: time="2022-04-12T20:23:27.384182433Z" level=info msg="CreateContainer within sandbox \"87d378e0d4e49f2737411e5f59f4d3e7d7b3dd770002c06a77f266aa1546d873\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:0,}"
	Apr 12 20:23:27 embed-certs-20220412200510-42006 containerd[345]: time="2022-04-12T20:23:27.397112839Z" level=info msg="CreateContainer within sandbox \"87d378e0d4e49f2737411e5f59f4d3e7d7b3dd770002c06a77f266aa1546d873\" for &ContainerMetadata{Name:kindnet-cni,Attempt:0,} returns container id \"35dd0377876c545c5dc4bbb6888b37789a2b801f8fc151e52e479a9af0b95295\""
	Apr 12 20:23:27 embed-certs-20220412200510-42006 containerd[345]: time="2022-04-12T20:23:27.397702467Z" level=info msg="StartContainer for \"35dd0377876c545c5dc4bbb6888b37789a2b801f8fc151e52e479a9af0b95295\""
	Apr 12 20:23:27 embed-certs-20220412200510-42006 containerd[345]: time="2022-04-12T20:23:27.588182019Z" level=info msg="StartContainer for \"35dd0377876c545c5dc4bbb6888b37789a2b801f8fc151e52e479a9af0b95295\" returns successfully"
	Apr 12 20:24:18 embed-certs-20220412200510-42006 containerd[345]: time="2022-04-12T20:24:18.050195980Z" level=error msg="ContainerStatus for \"69626b9d76ff744a82c51e1b00c28a92c0e13c9ffae81cf98f07bd1e8c045825\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"69626b9d76ff744a82c51e1b00c28a92c0e13c9ffae81cf98f07bd1e8c045825\": not found"
	Apr 12 20:24:18 embed-certs-20220412200510-42006 containerd[345]: time="2022-04-12T20:24:18.050774458Z" level=error msg="ContainerStatus for \"97ce26d1ad79e40945d6e067b716ff44f35d8cdcd3a109cc4221260b4884b98c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"97ce26d1ad79e40945d6e067b716ff44f35d8cdcd3a109cc4221260b4884b98c\": not found"
	Apr 12 20:24:18 embed-certs-20220412200510-42006 containerd[345]: time="2022-04-12T20:24:18.051234032Z" level=error msg="ContainerStatus for \"62252b7a89bca853a912d5823c05f8d528920323916e4828a07984c17748ffd0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"62252b7a89bca853a912d5823c05f8d528920323916e4828a07984c17748ffd0\": not found"
	Apr 12 20:24:18 embed-certs-20220412200510-42006 containerd[345]: time="2022-04-12T20:24:18.051620014Z" level=error msg="ContainerStatus for \"99c30d34ba6769dbe90b18eefcf0db92072e5d977b32371ee959bba91b958dc9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"99c30d34ba6769dbe90b18eefcf0db92072e5d977b32371ee959bba91b958dc9\": not found"
	Apr 12 20:26:07 embed-certs-20220412200510-42006 containerd[345]: time="2022-04-12T20:26:07.921916377Z" level=info msg="shim disconnected" id=35dd0377876c545c5dc4bbb6888b37789a2b801f8fc151e52e479a9af0b95295
	Apr 12 20:26:07 embed-certs-20220412200510-42006 containerd[345]: time="2022-04-12T20:26:07.921986523Z" level=warning msg="cleaning up after shim disconnected" id=35dd0377876c545c5dc4bbb6888b37789a2b801f8fc151e52e479a9af0b95295 namespace=k8s.io
	Apr 12 20:26:07 embed-certs-20220412200510-42006 containerd[345]: time="2022-04-12T20:26:07.922003860Z" level=info msg="cleaning up dead shim"
	Apr 12 20:26:07 embed-certs-20220412200510-42006 containerd[345]: time="2022-04-12T20:26:07.934109618Z" level=warning msg="cleanup warnings time=\"2022-04-12T20:26:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3832\n"
	Apr 12 20:26:08 embed-certs-20220412200510-42006 containerd[345]: time="2022-04-12T20:26:08.462325569Z" level=info msg="CreateContainer within sandbox \"87d378e0d4e49f2737411e5f59f4d3e7d7b3dd770002c06a77f266aa1546d873\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:1,}"
	Apr 12 20:26:08 embed-certs-20220412200510-42006 containerd[345]: time="2022-04-12T20:26:08.475567690Z" level=info msg="CreateContainer within sandbox \"87d378e0d4e49f2737411e5f59f4d3e7d7b3dd770002c06a77f266aa1546d873\" for &ContainerMetadata{Name:kindnet-cni,Attempt:1,} returns container id \"b81c34430cb1e01a16c1e8ce15da130c957aa8e09978f3d5d28604fa71d3179a\""
	Apr 12 20:26:08 embed-certs-20220412200510-42006 containerd[345]: time="2022-04-12T20:26:08.476213514Z" level=info msg="StartContainer for \"b81c34430cb1e01a16c1e8ce15da130c957aa8e09978f3d5d28604fa71d3179a\""
	Apr 12 20:26:08 embed-certs-20220412200510-42006 containerd[345]: time="2022-04-12T20:26:08.612439147Z" level=info msg="StartContainer for \"b81c34430cb1e01a16c1e8ce15da130c957aa8e09978f3d5d28604fa71d3179a\" returns successfully"
	
	* 
	* ==> describe nodes <==
	* Name:               embed-certs-20220412200510-42006
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-20220412200510-42006
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=dcd548d63d1c0dcbdc0ffc0bd37d4379117c142f
	                    minikube.k8s.io/name=embed-certs-20220412200510-42006
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_04_12T20_23_13_0700
	                    minikube.k8s.io/version=v1.25.2
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 12 Apr 2022 20:23:10 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-20220412200510-42006
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 12 Apr 2022 20:27:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 12 Apr 2022 20:23:25 +0000   Tue, 12 Apr 2022 20:23:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 12 Apr 2022 20:23:25 +0000   Tue, 12 Apr 2022 20:23:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 12 Apr 2022 20:23:25 +0000   Tue, 12 Apr 2022 20:23:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Tue, 12 Apr 2022 20:23:25 +0000   Tue, 12 Apr 2022 20:23:07 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    embed-certs-20220412200510-42006
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873828Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873828Ki
	  pods:               110
	System Info:
	  Machine ID:                 140a143b31184b58be947b52a01fff83
	  System UUID:                ce1f241f-9ecd-4653-8279-4a97e0fb4c59
	  Boot ID:                    16b2caa1-c1b9-4ccc-85b8-d4dc3f51a5e1
	  Kernel Version:             5.13.0-1023-gcp
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.5.10
	  Kubelet Version:            v1.23.5
	  Kube-Proxy Version:         v1.23.5
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                        ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-embed-certs-20220412200510-42006                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m10s
	  kube-system                 kindnet-n99zz                                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m2s
	  kube-system                 kube-apiserver-embed-certs-20220412200510-42006              250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m10s
	  kube-system                 kube-controller-manager-embed-certs-20220412200510-42006    200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m10s
	  kube-system                 kube-proxy-zbssv                                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 kube-scheduler-embed-certs-20220412200510-42006              100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From        Message
	  ----    ------                   ----   ----        -------
	  Normal  Starting                 4m1s   kube-proxy  
	  Normal  Starting                 4m10s  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m10s  kubelet     Node embed-certs-20220412200510-42006 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m10s  kubelet     Node embed-certs-20220412200510-42006 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m10s  kubelet     Node embed-certs-20220412200510-42006 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m10s  kubelet     Updated Node Allocatable limit across pods
	
	* 
	* ==> dmesg <==
	* [  +0.000009] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +0.125166] IPv4: martian source 10.244.0.7 from 10.244.0.7, on dev vethe3e22a2f
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff d2 83 e6 b4 2e c9 08 06
	[  +0.519855] IPv4: martian source 10.244.0.8 from 10.244.0.8, on dev vethde433a44
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fe f7 53 8a eb 26 08 06
	[  +0.208112] IPv4: martian source 10.244.0.9 from 10.244.0.9, on dev veth05fda112
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 06 c9 f0 64 c1 d9 08 06
	[Apr12 20:12] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +1.026706] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +1.023926] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +2.947865] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +1.023840] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +1.019933] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +2.959880] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +1.007861] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +1.023916] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	
	* 
	* ==> etcd [d8ee5605c19440f40fed34fa4f74ca552e24853fb32511064fb115ff3859b1e3] <==
	* {"level":"info","ts":"2022-04-12T20:23:07.411Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 switched to configuration voters=(12882097698489969905)"}
	{"level":"info","ts":"2022-04-12T20:23:07.412Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2022-04-12T20:23:07.414Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-04-12T20:23:07.414Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2022-04-12T20:23:07.414Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2022-04-12T20:23:07.415Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-04-12T20:23:07.415Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-04-12T20:23:08.001Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2022-04-12T20:23:08.001Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2022-04-12T20:23:08.001Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2022-04-12T20:23:08.001Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2022-04-12T20:23:08.001Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2022-04-12T20:23:08.001Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2022-04-12T20:23:08.001Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2022-04-12T20:23:08.001Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:embed-certs-20220412200510-42006 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2022-04-12T20:23:08.001Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-04-12T20:23:08.001Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-04-12T20:23:08.001Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-04-12T20:23:08.002Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-04-12T20:23:08.002Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-04-12T20:23:08.002Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2022-04-12T20:23:08.002Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-04-12T20:23:08.002Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-04-12T20:23:08.003Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-04-12T20:23:08.004Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
	
	* 
	* ==> kernel <==
	*  20:27:28 up  3:10,  0 users,  load average: 0.95, 0.89, 1.15
	Linux embed-certs-20220412200510-42006 5.13.0-1023-gcp #28~20.04.1-Ubuntu SMP Wed Mar 30 03:51:07 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [a6631d59aa0ffe547791d11102163b9ea508acf27460aef1bb5f74efb2bc37f7] <==
	* I0412 20:23:11.707739       1 controller.go:611] quota admission added evaluator for: endpoints
	I0412 20:23:11.712157       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0412 20:23:12.339341       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0412 20:23:12.945107       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0412 20:23:12.953992       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0412 20:23:12.964847       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0412 20:23:18.086192       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0412 20:23:26.625046       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0412 20:23:26.723706       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0412 20:23:27.320202       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0412 20:23:28.693198       1 alloc.go:329] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs=map[IPv4:10.104.64.146]
	I0412 20:23:29.090172       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.106.11.108]
	I0412 20:23:29.101401       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.97.44.187]
	W0412 20:23:29.512728       1 handler_proxy.go:104] no RequestInfo found in the context
	E0412 20:23:29.512822       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0412 20:23:29.512836       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0412 20:24:29.513182       1 handler_proxy.go:104] no RequestInfo found in the context
	E0412 20:24:29.513244       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0412 20:24:29.513258       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0412 20:26:29.513703       1 handler_proxy.go:104] no RequestInfo found in the context
	E0412 20:26:29.513783       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0412 20:26:29.513791       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [faccb325c093f09235bfc3b79a01d41253d94e3f9500d21aca905d6adf7de317] <==
	* E0412 20:23:28.883624       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-8469778f77" failed with pods "kubernetes-dashboard-8469778f77-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0412 20:23:28.886438       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" failed with pods "dashboard-metrics-scraper-56974995fc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0412 20:23:28.886452       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-56974995fc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0412 20:23:28.887289       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-8469778f77" failed with pods "kubernetes-dashboard-8469778f77-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0412 20:23:28.887294       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8469778f77-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0412 20:23:28.892507       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-8469778f77" failed with pods "kubernetes-dashboard-8469778f77-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0412 20:23:28.892513       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8469778f77-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0412 20:23:28.906604       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-56974995fc-dhvbk"
	I0412 20:23:28.980280       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8469778f77-4f5z8"
	E0412 20:23:56.196257       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0412 20:23:56.609613       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0412 20:24:26.217566       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0412 20:24:26.624956       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0412 20:24:56.237097       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0412 20:24:56.639444       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0412 20:25:26.257796       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0412 20:25:26.654366       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0412 20:25:56.275637       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0412 20:25:56.670877       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0412 20:26:26.294019       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0412 20:26:26.686393       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0412 20:26:56.310789       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0412 20:26:56.701567       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0412 20:27:26.326793       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0412 20:27:26.722379       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [85e45673df2189633dbfdab223666611b928677dbfa8af98b4a47fddf703bf69] <==
	* I0412 20:23:27.293347       1 node.go:163] Successfully retrieved node IP: 192.168.58.2
	I0412 20:23:27.293427       1 server_others.go:138] "Detected node IP" address="192.168.58.2"
	I0412 20:23:27.293491       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0412 20:23:27.316695       1 server_others.go:206] "Using iptables Proxier"
	I0412 20:23:27.316725       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0412 20:23:27.316732       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0412 20:23:27.316753       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0412 20:23:27.317207       1 server.go:656] "Version info" version="v1.23.5"
	I0412 20:23:27.317856       1 config.go:317] "Starting service config controller"
	I0412 20:23:27.317897       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0412 20:23:27.317904       1 config.go:226] "Starting endpoint slice config controller"
	I0412 20:23:27.317932       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0412 20:23:27.418588       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0412 20:23:27.418634       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [93ef4fab7f5ad9f2bafb4768753b511c286cd3b76fc9289ff8377907b9dc61e6] <==
	* W0412 20:23:10.295539       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0412 20:23:10.295566       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0412 20:23:10.295864       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0412 20:23:10.295876       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0412 20:23:10.295891       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0412 20:23:10.295895       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0412 20:23:11.134738       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0412 20:23:11.134794       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0412 20:23:11.227746       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0412 20:23:11.227796       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0412 20:23:11.271118       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0412 20:23:11.271159       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0412 20:23:11.291662       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0412 20:23:11.291701       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0412 20:23:11.325053       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0412 20:23:11.325097       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0412 20:23:11.356371       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0412 20:23:11.356478       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0412 20:23:11.404988       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0412 20:23:11.405037       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0412 20:23:11.444356       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0412 20:23:11.444387       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0412 20:23:11.456618       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0412 20:23:11.456653       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0412 20:23:11.689891       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2022-04-12 20:18:26 UTC, end at Tue 2022-04-12 20:27:28 UTC. --
	Apr 12 20:25:33 embed-certs-20220412200510-42006 kubelet[2910]: E0412 20:25:33.310667    2910 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:25:38 embed-certs-20220412200510-42006 kubelet[2910]: E0412 20:25:38.311998    2910 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:25:43 embed-certs-20220412200510-42006 kubelet[2910]: E0412 20:25:43.313069    2910 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:25:48 embed-certs-20220412200510-42006 kubelet[2910]: E0412 20:25:48.313790    2910 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:25:53 embed-certs-20220412200510-42006 kubelet[2910]: E0412 20:25:53.315083    2910 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:25:58 embed-certs-20220412200510-42006 kubelet[2910]: E0412 20:25:58.316238    2910 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:26:03 embed-certs-20220412200510-42006 kubelet[2910]: E0412 20:26:03.317832    2910 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:26:08 embed-certs-20220412200510-42006 kubelet[2910]: E0412 20:26:08.319428    2910 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:26:08 embed-certs-20220412200510-42006 kubelet[2910]: I0412 20:26:08.460347    2910 scope.go:110] "RemoveContainer" containerID="35dd0377876c545c5dc4bbb6888b37789a2b801f8fc151e52e479a9af0b95295"
	Apr 12 20:26:13 embed-certs-20220412200510-42006 kubelet[2910]: E0412 20:26:13.321077    2910 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:26:18 embed-certs-20220412200510-42006 kubelet[2910]: E0412 20:26:18.322721    2910 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:26:23 embed-certs-20220412200510-42006 kubelet[2910]: E0412 20:26:23.324261    2910 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:26:28 embed-certs-20220412200510-42006 kubelet[2910]: E0412 20:26:28.325067    2910 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:26:33 embed-certs-20220412200510-42006 kubelet[2910]: E0412 20:26:33.326605    2910 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:26:38 embed-certs-20220412200510-42006 kubelet[2910]: E0412 20:26:38.328230    2910 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:26:43 embed-certs-20220412200510-42006 kubelet[2910]: E0412 20:26:43.329420    2910 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:26:48 embed-certs-20220412200510-42006 kubelet[2910]: E0412 20:26:48.331240    2910 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:26:53 embed-certs-20220412200510-42006 kubelet[2910]: E0412 20:26:53.333007    2910 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:26:58 embed-certs-20220412200510-42006 kubelet[2910]: E0412 20:26:58.334255    2910 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:27:03 embed-certs-20220412200510-42006 kubelet[2910]: E0412 20:27:03.335246    2910 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:27:08 embed-certs-20220412200510-42006 kubelet[2910]: E0412 20:27:08.336686    2910 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:27:13 embed-certs-20220412200510-42006 kubelet[2910]: E0412 20:27:13.338439    2910 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:27:18 embed-certs-20220412200510-42006 kubelet[2910]: E0412 20:27:18.339369    2910 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:27:23 embed-certs-20220412200510-42006 kubelet[2910]: E0412 20:27:23.340575    2910 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:27:28 embed-certs-20220412200510-42006 kubelet[2910]: E0412 20:27:28.341551    2910 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	

-- /stdout --
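Note on the failure mode above: kubelet loops on "cni plugin not initialized" when nothing has written a CNI config into the directory it was pointed at (here /etc/cni/net.mk via kubelet.cni-conf-dir). If the cluster is still up, a quick manual check looks like this (a sketch; the profile name is taken from this test, and app=kindnet is the label minikube's kindnet manifest is expected to carry - fall back to listing all kube-system pods if it differs):

	# is there a CNI conflist in the directory kubelet watches?
	minikube ssh -p embed-certs-20220412200510-42006 "sudo ls -l /etc/cni/net.mk"
	# kindnet only writes its conflist once its DaemonSet pod is running
	kubectl --context embed-certs-20220412200510-42006 -n kube-system get pods -l app=kindnet
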
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20220412200510-42006 -n embed-certs-20220412200510-42006
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-20220412200510-42006 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: coredns-64897985d-74r7x metrics-server-b955d9d8-vmhkr storage-provisioner dashboard-metrics-scraper-56974995fc-dhvbk kubernetes-dashboard-8469778f77-4f5z8
helpers_test.go:272: ======> post-mortem[TestStartStop/group/embed-certs/serial/SecondStart]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context embed-certs-20220412200510-42006 describe pod coredns-64897985d-74r7x metrics-server-b955d9d8-vmhkr storage-provisioner dashboard-metrics-scraper-56974995fc-dhvbk kubernetes-dashboard-8469778f77-4f5z8
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context embed-certs-20220412200510-42006 describe pod coredns-64897985d-74r7x metrics-server-b955d9d8-vmhkr storage-provisioner dashboard-metrics-scraper-56974995fc-dhvbk kubernetes-dashboard-8469778f77-4f5z8: exit status 1 (69.923741ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-64897985d-74r7x" not found
	Error from server (NotFound): pods "metrics-server-b955d9d8-vmhkr" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-56974995fc-dhvbk" not found
	Error from server (NotFound): pods "kubernetes-dashboard-8469778f77-4f5z8" not found

** /stderr **
helpers_test.go:277: kubectl --context embed-certs-20220412200510-42006 describe pod coredns-64897985d-74r7x metrics-server-b955d9d8-vmhkr storage-provisioner dashboard-metrics-scraper-56974995fc-dhvbk kubernetes-dashboard-8469778f77-4f5z8: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (543.85s)
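The describe step above reports NotFound because the pod names are passed to kubectl without their namespaces (coredns, metrics-server and storage-provisioner live in kube-system, the dashboard pods in kubernetes-dashboard), so kubectl looks them up in default. A namespace-aware variant of the same post-mortem, as a sketch:

	# describe every non-running pod in its own namespace
	kubectl --context embed-certs-20220412200510-42006 get po -A \
	  --field-selector=status.phase!=Running \
	  -o jsonpath='{range .items[*]}{.metadata.namespace}{" "}{.metadata.name}{"\n"}{end}' |
	while read -r ns name; do
	  kubectl --context embed-certs-20220412200510-42006 -n "$ns" describe pod "$name"
	done
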

TestStartStop/group/default-k8s-different-port/serial/SecondStart (544.74s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:240: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-different-port-20220412201228-42006 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.5
E0412 20:25:54.808256   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/ingress-addon-legacy-20220412192911-42006/client.crt: no such file or directory
E0412 20:25:58.260166   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/custom-weave-20220412195203-42006/client.crt: no such file or directory
E0412 20:26:07.734707   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220412200453-42006/client.crt: no such file or directory
E0412 20:27:10.366347   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/cilium-20220412195203-42006/client.crt: no such file or directory
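The cert_rotation errors are most likely stale watchers: client-go keeps polling client certificates of profiles (ingress-addon-legacy, custom-weave, no-preload, cilium) that earlier tests already deleted, so the paths no longer exist. Cross-checking what actually remains, as a sketch:

	minikube profile list
	ls /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles
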

=== CONT  TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p default-k8s-different-port-20220412201228-42006 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.5: exit status 80 (9m2.51602433s)

-- stdout --
	* [default-k8s-different-port-20220412201228-42006] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=13812
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on existing profile
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	* Starting control plane node default-k8s-different-port-20220412201228-42006 in cluster default-k8s-different-port-20220412201228-42006
	* Pulling base image ...
	* Restarting existing docker container for "default-k8s-different-port-20220412201228-42006" ...
	* Preparing Kubernetes v1.23.5 on containerd 1.5.10 ...
	  - kubelet.cni-conf-dir=/etc/cni/net.mk
	* Configuring CNI (Container Networking Interface) ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	  - Using image kubernetesui/dashboard:v2.5.1
	  - Using image k8s.gcr.io/echoserver:1.4
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	
	

-- /stdout --
** stderr ** 
	I0412 20:25:40.977489  302775 out.go:297] Setting OutFile to fd 1 ...
	I0412 20:25:40.977641  302775 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0412 20:25:40.977651  302775 out.go:310] Setting ErrFile to fd 2...
	I0412 20:25:40.977656  302775 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0412 20:25:40.977775  302775 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/bin
	I0412 20:25:40.978024  302775 out.go:304] Setting JSON to false
	I0412 20:25:40.979319  302775 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":11294,"bootTime":1649783847,"procs":329,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1023-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0412 20:25:40.979397  302775 start.go:125] virtualization: kvm guest
	I0412 20:25:40.982252  302775 out.go:176] * [default-k8s-different-port-20220412201228-42006] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	I0412 20:25:40.984292  302775 out.go:176]   - MINIKUBE_LOCATION=13812
	I0412 20:25:40.982508  302775 notify.go:193] Checking for updates...
	I0412 20:25:40.986069  302775 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0412 20:25:40.987699  302775 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0412 20:25:40.989177  302775 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube
	I0412 20:25:40.990958  302775 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0412 20:25:40.991481  302775 config.go:178] Loaded profile config "default-k8s-different-port-20220412201228-42006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.5
	I0412 20:25:40.992603  302775 driver.go:346] Setting default libvirt URI to qemu:///system
	I0412 20:25:41.036514  302775 docker.go:137] docker version: linux-20.10.14
	I0412 20:25:41.036604  302775 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0412 20:25:41.138222  302775 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:75 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2022-04-12 20:25:41.069111625 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1023-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662799872 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
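minikube parses the full `docker system info --format "{{json .}}"` blob above before emitting the cgroup warnings that follow; when reading such logs by hand it is easier to template out just the relevant fields. A sketch (field names come from Docker's types.Info and availability depends on the daemon version):

	docker system info --format 'cgroup-driver={{.CgroupDriver}} mem-limit={{.MemoryLimit}} swap-limit={{.SwapLimit}}'
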
	I0412 20:25:41.138342  302775 docker.go:254] overlay module found
	I0412 20:25:41.140887  302775 out.go:176] * Using the docker driver based on existing profile
	I0412 20:25:41.140919  302775 start.go:284] selected driver: docker
	I0412 20:25:41.140926  302775 start.go:801] validating driver "docker" against &{Name:default-k8s-different-port-20220412201228-42006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:default-k8s-different-port-20220412201228-42006 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.23.5 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0412 20:25:41.141041  302775 start.go:812] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	W0412 20:25:41.141086  302775 oci.go:120] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0412 20:25:41.141109  302775 out.go:241] ! Your cgroup does not allow setting memory.
	! Your cgroup does not allow setting memory.
	I0412 20:25:41.142724  302775 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0412 20:25:41.143315  302775 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0412 20:25:41.241191  302775 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:75 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2022-04-12 20:25:41.17623516 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1023-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662799872 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	W0412 20:25:41.241354  302775 oci.go:120] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0412 20:25:41.241406  302775 out.go:241] ! Your cgroup does not allow setting memory.
	! Your cgroup does not allow setting memory.
	I0412 20:25:41.243729  302775 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0412 20:25:41.243836  302775 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0412 20:25:41.243861  302775 cni.go:93] Creating CNI manager for ""
	I0412 20:25:41.243872  302775 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0412 20:25:41.243889  302775 start_flags.go:306] config:
	{Name:default-k8s-different-port-20220412201228-42006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:default-k8s-different-port-20220412201228-42006 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.23.5 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0412 20:25:41.246889  302775 out.go:176] * Starting control plane node default-k8s-different-port-20220412201228-42006 in cluster default-k8s-different-port-20220412201228-42006
	I0412 20:25:41.246928  302775 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0412 20:25:41.248537  302775 out.go:176] * Pulling base image ...
	I0412 20:25:41.248572  302775 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime containerd
	I0412 20:25:41.248612  302775 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-containerd-overlay2-amd64.tar.lz4
	I0412 20:25:41.248642  302775 cache.go:57] Caching tarball of preloaded images
	I0412 20:25:41.248665  302775 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon
	I0412 20:25:41.248918  302775 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0412 20:25:41.248940  302775 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.5 on containerd
	I0412 20:25:41.249111  302775 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220412201228-42006/config.json ...
	I0412 20:25:41.295232  302775 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon, skipping pull
	I0412 20:25:41.295265  302775 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 exists in daemon, skipping load
	I0412 20:25:41.295288  302775 cache.go:206] Successfully downloaded all kic artifacts
	I0412 20:25:41.295333  302775 start.go:352] acquiring machines lock for default-k8s-different-port-20220412201228-42006: {Name:mk673e2ef5ad74005354b6f8044ae48e370ea3c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0412 20:25:41.295441  302775 start.go:356] acquired machines lock for "default-k8s-different-port-20220412201228-42006" in 78.98µs
	I0412 20:25:41.295472  302775 start.go:94] Skipping create...Using existing machine configuration
	I0412 20:25:41.295481  302775 fix.go:55] fixHost starting: 
	I0412 20:25:41.295714  302775 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220412201228-42006 --format={{.State.Status}}
	I0412 20:25:41.330052  302775 fix.go:103] recreateIfNeeded on default-k8s-different-port-20220412201228-42006: state=Stopped err=<nil>
	W0412 20:25:41.330099  302775 fix.go:129] unexpected machine state, will restart: <nil>
	I0412 20:25:41.332812  302775 out.go:176] * Restarting existing docker container for "default-k8s-different-port-20220412201228-42006" ...
	I0412 20:25:41.332900  302775 cli_runner.go:164] Run: docker start default-k8s-different-port-20220412201228-42006
	I0412 20:25:41.735198  302775 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220412201228-42006 --format={{.State.Status}}
	I0412 20:25:41.771480  302775 kic.go:416] container "default-k8s-different-port-20220412201228-42006" state is running.
	I0412 20:25:41.771899  302775 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220412201228-42006
	I0412 20:25:41.807070  302775 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220412201228-42006/config.json ...
	I0412 20:25:41.807321  302775 machine.go:88] provisioning docker machine ...
	I0412 20:25:41.807352  302775 ubuntu.go:169] provisioning hostname "default-k8s-different-port-20220412201228-42006"
	I0412 20:25:41.807404  302775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220412201228-42006
	I0412 20:25:41.843643  302775 main.go:134] libmachine: Using SSH client type: native
	I0412 20:25:41.843852  302775 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7d71c0] 0x7da220 <nil>  [] 0s} 127.0.0.1 49437 <nil> <nil>}
	I0412 20:25:41.843870  302775 main.go:134] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20220412201228-42006 && echo "default-k8s-different-port-20220412201228-42006" | sudo tee /etc/hostname
	I0412 20:25:41.844512  302775 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60986->127.0.0.1:49437: read: connection reset by peer
	I0412 20:25:44.977976  302775 main.go:134] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20220412201228-42006
	
	I0412 20:25:44.978060  302775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220412201228-42006
	I0412 20:25:45.012801  302775 main.go:134] libmachine: Using SSH client type: native
	I0412 20:25:45.012959  302775 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7d71c0] 0x7da220 <nil>  [] 0s} 127.0.0.1 49437 <nil> <nil>}
	I0412 20:25:45.012982  302775 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20220412201228-42006' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20220412201228-42006/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20220412201228-42006' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0412 20:25:45.132428  302775 main.go:134] libmachine: SSH cmd err, output: <nil>: 
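The two SSH commands above set the container's hostname and pin it to 127.0.1.1 in /etc/hosts. Spot-checking the result from outside, as a sketch:

	minikube ssh -p default-k8s-different-port-20220412201228-42006 "hostname; grep 127.0.1.1 /etc/hosts"
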
	I0412 20:25:45.132458  302775 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube}
	I0412 20:25:45.132515  302775 ubuntu.go:177] setting up certificates
	I0412 20:25:45.132527  302775 provision.go:83] configureAuth start
	I0412 20:25:45.132583  302775 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220412201228-42006
	I0412 20:25:45.167292  302775 provision.go:138] copyHostCerts
	I0412 20:25:45.167378  302775 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem, removing ...
	I0412 20:25:45.167393  302775 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem
	I0412 20:25:45.167463  302775 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem (1082 bytes)
	I0412 20:25:45.167565  302775 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem, removing ...
	I0412 20:25:45.167579  302775 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem
	I0412 20:25:45.167616  302775 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem (1123 bytes)
	I0412 20:25:45.167686  302775 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem, removing ...
	I0412 20:25:45.167698  302775 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem
	I0412 20:25:45.167731  302775 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem (1675 bytes)
	I0412 20:25:45.167790  302775 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20220412201228-42006 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-different-port-20220412201228-42006]
	I0412 20:25:45.287902  302775 provision.go:172] copyRemoteCerts
	I0412 20:25:45.287991  302775 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0412 20:25:45.288040  302775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220412201228-42006
	I0412 20:25:45.322519  302775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220412201228-42006/id_rsa Username:docker}
	I0412 20:25:45.411995  302775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0412 20:25:45.430261  302775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem --> /etc/docker/server.pem (1310 bytes)
	I0412 20:25:45.448712  302775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0412 20:25:45.466551  302775 provision.go:86] duration metric: configureAuth took 334.00574ms
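configureAuth regenerates the server certificate with every name and IP from the san=[...] list above; if TLS to the node later fails, printing the SANs is the fastest check. A sketch:

	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'
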
	I0412 20:25:45.466577  302775 ubuntu.go:193] setting minikube options for container-runtime
	I0412 20:25:45.466762  302775 config.go:178] Loaded profile config "default-k8s-different-port-20220412201228-42006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.5
	I0412 20:25:45.466775  302775 machine.go:91] provisioned docker machine in 3.659438406s
	I0412 20:25:45.466782  302775 start.go:306] post-start starting for "default-k8s-different-port-20220412201228-42006" (driver="docker")
	I0412 20:25:45.466788  302775 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0412 20:25:45.466829  302775 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0412 20:25:45.466867  302775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220412201228-42006
	I0412 20:25:45.501481  302775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220412201228-42006/id_rsa Username:docker}
	I0412 20:25:45.588112  302775 ssh_runner.go:195] Run: cat /etc/os-release
	I0412 20:25:45.591046  302775 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0412 20:25:45.591069  302775 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0412 20:25:45.591080  302775 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0412 20:25:45.591089  302775 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0412 20:25:45.591103  302775 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/addons for local assets ...
	I0412 20:25:45.591152  302775 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files for local assets ...
	I0412 20:25:45.591229  302775 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/420062.pem -> 420062.pem in /etc/ssl/certs
	I0412 20:25:45.591327  302775 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0412 20:25:45.598574  302775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/420062.pem --> /etc/ssl/certs/420062.pem (1708 bytes)
	I0412 20:25:45.617879  302775 start.go:309] post-start completed in 151.076407ms
	I0412 20:25:45.617968  302775 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0412 20:25:45.618023  302775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220412201228-42006
	I0412 20:25:45.652386  302775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220412201228-42006/id_rsa Username:docker}
	I0412 20:25:45.736884  302775 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0412 20:25:45.741043  302775 fix.go:57] fixHost completed within 4.445551228s
	I0412 20:25:45.741076  302775 start.go:81] releasing machines lock for "default-k8s-different-port-20220412201228-42006", held for 4.445612789s
	I0412 20:25:45.741159  302775 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220412201228-42006
	I0412 20:25:45.775496  302775 ssh_runner.go:195] Run: systemctl --version
	I0412 20:25:45.775542  302775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220412201228-42006
	I0412 20:25:45.775584  302775 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0412 20:25:45.775646  302775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220412201228-42006
	I0412 20:25:45.812306  302775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220412201228-42006/id_rsa Username:docker}
	I0412 20:25:45.812626  302775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220412201228-42006/id_rsa Username:docker}
	I0412 20:25:45.921246  302775 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0412 20:25:45.933022  302775 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0412 20:25:45.942974  302775 docker.go:183] disabling docker service ...
	I0412 20:25:45.943055  302775 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0412 20:25:45.953239  302775 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0412 20:25:45.962782  302775 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0412 20:25:46.046623  302775 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0412 20:25:46.129007  302775 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0412 20:25:46.138577  302775 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0412 20:25:46.152328  302775 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLm1vbml0b3IudjEuY2dyb3VwcyJdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNiIKICAgIHN0YXRzX2NvbGxlY3RfcGVyaW9kID0gMTAKICAgIGVuYWJsZV90bHNfc3RyZWFtaW5nID0gZmFsc2UKICAgIG1heF9jb250YWluZXJfbG9nX2xpbmVfc2l6ZSA9IDE2Mzg0CiAgICByZXN0cmljdF9vb21fc2NvcmVfYWRqID0gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZF0KICAgICAgZGlzY2FyZF91bnBhY2tlZF9sYXllcnMgPSB0cnVlCiAgICAgIHNuYXBzaG90dGVyID0gIm92ZXJsYXlmcyIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQuZGVmYXVsdF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnVudHJ1c3RlZF93b3JrbG9hZF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICIiCiAgICAgICAgcnVudGltZV9lbmdpbmUgPSAiIgogICAgICAgIHJ1bnRpbWVfcm9vdCA9ICIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzLnJ1bmNdCiAgICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICBTeXN0ZW1kQ2dyb3VwID0gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY25pXQogICAgICBiaW5fZGlyID0gIi9vcHQvY25pL2JpbiIKICAgICAgY29uZl9kaXIgPSAiL2V0Yy9jbmkvbmV0Lm1rIgogICAgICBjb25mX3RlbXBsYXRlID0gIiIKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5yZWdpc3RyeV0KICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5Lm1pcnJvcnNdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5Lm1pcnJvcnMuImRvY2tlci5pbyJdCiAgICAgICAgICBlbmRwb2ludCA9IFsiaHR0cHM6Ly9yZWdpc3RyeS0xLmRvY2tlci5pbyJdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuc2VydmljZS52MS5kaWZmLXNlcnZpY2UiXQogICAgZGVmYXVsdCA9IFsid2Fsa2luZyJdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ2MudjEuc2NoZWR1bGVyIl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
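The long printf argument above is the entire containerd config.toml, base64-encoded so it survives shell quoting on its way to the node. Two ways to read it, as a sketch (the blob is truncated here; use the full string from the log):

	# decode the blob locally
	printf %s 'dmVyc2lvbiA9IDIK...' | base64 -d | less
	# or read the rendered file on the node
	minikube ssh -p default-k8s-different-port-20220412201228-42006 "sudo cat /etc/containerd/config.toml"
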
	I0412 20:25:46.166473  302775 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0412 20:25:46.173272  302775 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0412 20:25:46.180113  302775 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0412 20:25:46.251894  302775 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0412 20:25:46.327719  302775 start.go:441] Will wait 60s for socket path /run/containerd/containerd.sock
	I0412 20:25:46.327799  302775 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0412 20:25:46.331793  302775 start.go:462] Will wait 60s for crictl version
	I0412 20:25:46.331863  302775 ssh_runner.go:195] Run: sudo crictl version
	I0412 20:25:46.357306  302775 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-04-12T20:25:46Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
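The first `crictl version` races containerd's restart: the socket already exists (the stat above succeeded) but the CRI plugin has not finished initializing, hence "server is not initialized yet" and the ~11s retry. Polling the same endpoint by hand, as a sketch:

	until sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock version; do sleep 1; done
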
	I0412 20:25:57.404189  302775 ssh_runner.go:195] Run: sudo crictl version
	I0412 20:25:57.428756  302775 start.go:471] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.5.10
	RuntimeApiVersion:  v1alpha2
	I0412 20:25:57.428821  302775 ssh_runner.go:195] Run: containerd --version
	I0412 20:25:57.451527  302775 ssh_runner.go:195] Run: containerd --version
	I0412 20:25:57.476141  302775 out.go:176] * Preparing Kubernetes v1.23.5 on containerd 1.5.10 ...
	I0412 20:25:57.476238  302775 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220412201228-42006 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0412 20:25:57.510584  302775 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0412 20:25:57.514080  302775 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
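The grep/echo/cp idiom above rewrites /etc/hosts without replacing its inode; inside a Docker container /etc/hosts is a bind mount, so an in-place rename (which is what sed -i does) would typically fail with "Device or resource busy". The generic form of the pattern:

	# drop the old mapping, append the fresh one, then copy *over* the mounted file
	{ grep -v $'\thost.minikube.internal$' /etc/hosts; printf '192.168.49.1\thost.minikube.internal\n'; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts
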
	I0412 20:25:57.525999  302775 out.go:176]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0412 20:25:57.526084  302775 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime containerd
	I0412 20:25:57.526141  302775 ssh_runner.go:195] Run: sudo crictl images --output json
	I0412 20:25:57.550533  302775 containerd.go:607] all images are preloaded for containerd runtime.
	I0412 20:25:57.550557  302775 containerd.go:521] Images already preloaded, skipping extraction
	I0412 20:25:57.550612  302775 ssh_runner.go:195] Run: sudo crictl images --output json
	I0412 20:25:57.574550  302775 containerd.go:607] all images are preloaded for containerd runtime.
	I0412 20:25:57.574580  302775 cache_images.go:84] Images are preloaded, skipping loading
	I0412 20:25:57.574639  302775 ssh_runner.go:195] Run: sudo crictl info
	I0412 20:25:57.599639  302775 cni.go:93] Creating CNI manager for ""
	I0412 20:25:57.599668  302775 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0412 20:25:57.599690  302775 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0412 20:25:57.599711  302775 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8444 KubernetesVersion:v1.23.5 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20220412201228-42006 NodeName:default-k8s-different-port-20220412201228-42006 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0412 20:25:57.599848  302775 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "default-k8s-different-port-20220412201228-42006"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.5
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
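The same rendered config is copied to /var/tmp/minikube/kubeadm.yaml.new a few lines below. To see how it differs from kubeadm's own defaults for these API versions, a sketch:

	# upstream defaults for the same component configs
	kubeadm config print init-defaults --component-configs KubeletConfiguration,KubeProxyConfiguration
	# the file minikube actually feeds to kubeadm on the node
	minikube ssh -p default-k8s-different-port-20220412201228-42006 "sudo cat /var/tmp/minikube/kubeadm.yaml.new"
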
	I0412 20:25:57.599941  302775 kubeadm.go:936] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.5/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=default-k8s-different-port-20220412201228-42006 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.5 ClusterName:default-k8s-different-port-20220412201228-42006 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
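This unit content lands as the 10-kubeadm.conf drop-in scp'd just below (592 bytes), which systemd merges with the base kubelet.service. Inspecting the effective unit on the node, as a sketch:

	minikube ssh -p default-k8s-different-port-20220412201228-42006 "sudo systemctl cat kubelet; systemctl show kubelet -p ExecStart"
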
	I0412 20:25:57.600004  302775 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.5
	I0412 20:25:57.607520  302775 binaries.go:44] Found k8s binaries, skipping transfer
	I0412 20:25:57.607582  302775 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0412 20:25:57.614505  302775 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (592 bytes)
	I0412 20:25:57.627492  302775 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0412 20:25:57.640002  302775 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2076 bytes)
	I0412 20:25:57.652626  302775 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0412 20:25:57.655502  302775 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0412 20:25:57.664909  302775 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220412201228-42006 for IP: 192.168.49.2
	I0412 20:25:57.665006  302775 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.key
	I0412 20:25:57.665052  302775 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.key
	I0412 20:25:57.665122  302775 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220412201228-42006/client.key
	I0412 20:25:57.665173  302775 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220412201228-42006/apiserver.key.dd3b5fb2
	I0412 20:25:57.665208  302775 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220412201228-42006/proxy-client.key
	I0412 20:25:57.665293  302775 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/42006.pem (1338 bytes)
	W0412 20:25:57.665321  302775 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/42006_empty.pem, impossibly tiny 0 bytes
	I0412 20:25:57.665332  302775 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem (1679 bytes)
	I0412 20:25:57.665358  302775 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem (1082 bytes)
	I0412 20:25:57.665384  302775 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem (1123 bytes)
	I0412 20:25:57.665409  302775 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem (1675 bytes)
	I0412 20:25:57.665455  302775 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/420062.pem (1708 bytes)
	I0412 20:25:57.666053  302775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220412201228-42006/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0412 20:25:57.683954  302775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220412201228-42006/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0412 20:25:57.701541  302775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220412201228-42006/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0412 20:25:57.719461  302775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220412201228-42006/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0412 20:25:57.737734  302775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0412 20:25:57.756457  302775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0412 20:25:57.774968  302775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0412 20:25:57.793059  302775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0412 20:25:57.810982  302775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0412 20:25:57.829015  302775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/42006.pem --> /usr/share/ca-certificates/42006.pem (1338 bytes)
	I0412 20:25:57.847312  302775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/420062.pem --> /usr/share/ca-certificates/420062.pem (1708 bytes)
	I0412 20:25:57.864991  302775 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0412 20:25:57.878055  302775 ssh_runner.go:195] Run: openssl version
	I0412 20:25:57.883971  302775 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/420062.pem && ln -fs /usr/share/ca-certificates/420062.pem /etc/ssl/certs/420062.pem"
	I0412 20:25:57.892175  302775 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/420062.pem
	I0412 20:25:57.895736  302775 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Apr 12 19:26 /usr/share/ca-certificates/420062.pem
	I0412 20:25:57.895785  302775 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/420062.pem
	I0412 20:25:57.900802  302775 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/420062.pem /etc/ssl/certs/3ec20f2e.0"
	I0412 20:25:57.908397  302775 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0412 20:25:57.916262  302775 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0412 20:25:57.919469  302775 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Apr 12 19:21 /usr/share/ca-certificates/minikubeCA.pem
	I0412 20:25:57.919524  302775 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0412 20:25:57.924891  302775 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0412 20:25:57.932113  302775 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/42006.pem && ln -fs /usr/share/ca-certificates/42006.pem /etc/ssl/certs/42006.pem"
	I0412 20:25:57.940241  302775 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42006.pem
	I0412 20:25:57.943396  302775 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Apr 12 19:26 /usr/share/ca-certificates/42006.pem
	I0412 20:25:57.943447  302775 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42006.pem
	I0412 20:25:57.948339  302775 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/42006.pem /etc/ssl/certs/51391683.0"
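Note: the openssl/ln pairs above install each CA under its OpenSSL subject-hash name. OpenSSL resolves trust anchors in /etc/ssl/certs via "<subject-hash>.0" symlinks, which is where names like b5213941.0 and 3ec20f2e.0 come from. A minimal sketch of the same step, using the minikubeCA.pem path from the log:

	# Compute the subject hash, then create the "<hash>.0" lookup symlink.
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"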
	I0412 20:25:57.955118  302775 kubeadm.go:391] StartCluster: {Name:default-k8s-different-port-20220412201228-42006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:default-k8s-different-port-20220412201228-42006 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.23.5 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0412 20:25:57.955221  302775 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0412 20:25:57.955270  302775 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0412 20:25:57.980566  302775 cri.go:87] found id: "9833ae46466ccbb9055512e120cc659bee6ff8ac05bf843caeb9217333fd6b63"
	I0412 20:25:57.980602  302775 cri.go:87] found id: "e86db06fb9ce1685b312bc36622f28895b85dab6e39ee399901dce4efc6da848"
	I0412 20:25:57.980613  302775 cri.go:87] found id: "51def5f5fb57c8ab61a9c585b1fe038e725e93a3a81684c7e48cceffbcd0e646"
	I0412 20:25:57.980624  302775 cri.go:87] found id: "3c8657a1a5932876c532e5632e32b1b7bd034c015a4b5519a1ff53cf749d1ffd"
	I0412 20:25:57.980634  302775 cri.go:87] found id: "1032ec9dc604b2d805be253a0f7df89424fc5ef71ef86566ee57cd79cf66939c"
	I0412 20:25:57.980651  302775 cri.go:87] found id: "71af7fb31571e3cef12dcdba3ab49897e95bdbe6c1d9d6d5bbb1c36c97242cda"
	I0412 20:25:57.980666  302775 cri.go:87] found id: ""
	I0412 20:25:57.980719  302775 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0412 20:25:57.995137  302775 cri.go:114] JSON = null
	W0412 20:25:57.995186  302775 kubeadm.go:398] unpause failed: list paused: list returned 0 containers, but ps returned 6
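Note: the container IDs above come from crictl filtered on the pod-namespace label, while the paused-state check goes through runc directly; both commands are the ones logged and can be replayed by hand inside the node:

	# List all kube-system container IDs (what cri.go:87 reports):
	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# Query runc for container state (what produced "JSON = null"):
	sudo runc --root /run/containerd/runc/k8s.io list -f json

The "unpause failed" warning is the mismatch between the two: runc reports no containers while crictl lists six.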
	I0412 20:25:57.995232  302775 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0412 20:25:58.002528  302775 kubeadm.go:402] found existing configuration files, will attempt cluster restart
	I0412 20:25:58.002554  302775 kubeadm.go:601] restartCluster start
	I0412 20:25:58.002599  302775 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0412 20:25:58.009347  302775 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:25:58.010180  302775 kubeconfig.go:116] verify returned: extract IP: "default-k8s-different-port-20220412201228-42006" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0412 20:25:58.010679  302775 kubeconfig.go:127] "default-k8s-different-port-20220412201228-42006" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig - will repair!
	I0412 20:25:58.011431  302775 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig: {Name:mk47182dadcc139652898f38b199a7292a3b4031 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 20:25:58.013184  302775 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0412 20:25:58.020529  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:25:58.020588  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:25:58.029161  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:25:58.229565  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:25:58.229683  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:25:58.238841  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:25:58.430075  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:25:58.430153  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:25:58.439240  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:25:58.629511  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:25:58.629591  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:25:58.638727  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:25:58.829920  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:25:58.830002  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:25:58.839034  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:25:59.030207  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:25:59.030273  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:25:59.038870  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:25:59.230141  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:25:59.230228  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:25:59.239506  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:25:59.429823  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:25:59.429895  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:25:59.438940  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:25:59.630148  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:25:59.630223  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:25:59.639014  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:25:59.830279  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:25:59.830365  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:25:59.839400  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:26:00.029480  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:26:00.029578  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:26:00.039506  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:26:00.229819  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:26:00.229932  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:26:00.238666  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:26:00.429971  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:26:00.430041  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:26:00.439152  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:26:00.629391  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:26:00.629472  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:26:00.638771  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:26:00.830087  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:26:00.830179  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:26:00.839152  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:26:01.029653  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:26:01.029717  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:26:01.038688  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:26:01.038731  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:26:01.038777  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:26:01.047040  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
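Note on the probe loop above: with pgrep, -f matches the pattern against the full command line, -x requires that match to be exact, and -n selects only the newest matching process. Exit status 1 simply means no kube-apiserver process exists yet, which is what each "stopped: unable to get apiserver pid" line records. Replayed by hand:

	sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	echo $?   # 1 while the apiserver is down, 0 once a matching process exists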
	I0412 20:26:01.047087  302775 kubeadm.go:576] needs reconfigure: apiserver error: timed out waiting for the condition
	I0412 20:26:01.047098  302775 kubeadm.go:1067] stopping kube-system containers ...
	I0412 20:26:01.047119  302775 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0412 20:26:01.047173  302775 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0412 20:26:01.074252  302775 cri.go:87] found id: "9833ae46466ccbb9055512e120cc659bee6ff8ac05bf843caeb9217333fd6b63"
	I0412 20:26:01.074279  302775 cri.go:87] found id: "e86db06fb9ce1685b312bc36622f28895b85dab6e39ee399901dce4efc6da848"
	I0412 20:26:01.074289  302775 cri.go:87] found id: "51def5f5fb57c8ab61a9c585b1fe038e725e93a3a81684c7e48cceffbcd0e646"
	I0412 20:26:01.074295  302775 cri.go:87] found id: "3c8657a1a5932876c532e5632e32b1b7bd034c015a4b5519a1ff53cf749d1ffd"
	I0412 20:26:01.074302  302775 cri.go:87] found id: "1032ec9dc604b2d805be253a0f7df89424fc5ef71ef86566ee57cd79cf66939c"
	I0412 20:26:01.074309  302775 cri.go:87] found id: "71af7fb31571e3cef12dcdba3ab49897e95bdbe6c1d9d6d5bbb1c36c97242cda"
	I0412 20:26:01.074316  302775 cri.go:87] found id: ""
	I0412 20:26:01.074322  302775 cri.go:232] Stopping containers: [9833ae46466ccbb9055512e120cc659bee6ff8ac05bf843caeb9217333fd6b63 e86db06fb9ce1685b312bc36622f28895b85dab6e39ee399901dce4efc6da848 51def5f5fb57c8ab61a9c585b1fe038e725e93a3a81684c7e48cceffbcd0e646 3c8657a1a5932876c532e5632e32b1b7bd034c015a4b5519a1ff53cf749d1ffd 1032ec9dc604b2d805be253a0f7df89424fc5ef71ef86566ee57cd79cf66939c 71af7fb31571e3cef12dcdba3ab49897e95bdbe6c1d9d6d5bbb1c36c97242cda]
	I0412 20:26:01.074376  302775 ssh_runner.go:195] Run: which crictl
	I0412 20:26:01.077493  302775 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop 9833ae46466ccbb9055512e120cc659bee6ff8ac05bf843caeb9217333fd6b63 e86db06fb9ce1685b312bc36622f28895b85dab6e39ee399901dce4efc6da848 51def5f5fb57c8ab61a9c585b1fe038e725e93a3a81684c7e48cceffbcd0e646 3c8657a1a5932876c532e5632e32b1b7bd034c015a4b5519a1ff53cf749d1ffd 1032ec9dc604b2d805be253a0f7df89424fc5ef71ef86566ee57cd79cf66939c 71af7fb31571e3cef12dcdba3ab49897e95bdbe6c1d9d6d5bbb1c36c97242cda
	I0412 20:26:01.103072  302775 ssh_runner.go:195] Run: sudo systemctl stop kubelet
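Note: before replaying kubeadm phases, the restart path stops the kube-system containers and then kubelet, in that order. A minimal sketch of the same shutdown, assuming crictl is on PATH (the log resolves it via "which crictl" first):

	# Stop every kube-system container, then the kubelet that would restart them.
	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system \
		| xargs -r sudo crictl stop
	sudo systemctl stop kubelet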
	I0412 20:26:01.114425  302775 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0412 20:26:01.122172  302775 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Apr 12 20:12 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Apr 12 20:12 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2127 Apr 12 20:13 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5592 Apr 12 20:12 /etc/kubernetes/scheduler.conf
	
	I0412 20:26:01.122241  302775 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0412 20:26:01.129554  302775 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0412 20:26:01.136877  302775 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0412 20:26:01.143698  302775 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:26:01.143755  302775 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0412 20:26:01.150238  302775 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0412 20:26:01.157232  302775 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:26:01.157288  302775 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0412 20:26:01.164343  302775 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0412 20:26:01.171782  302775 kubeadm.go:678] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0412 20:26:01.171805  302775 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0412 20:26:01.218060  302775 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0412 20:26:01.745379  302775 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0412 20:26:01.885213  302775 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0412 20:26:01.938174  302775 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
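Note: rather than a full "kubeadm init", restartCluster replays the individual init phases against the existing /var/tmp/minikube/kubeadm.yaml. A condensed sketch of the sequence logged above:

	# $phase is intentionally left unquoted so "certs all" splits into two arguments.
	for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
		sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" \
			kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
	done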
	I0412 20:26:02.011809  302775 api_server.go:51] waiting for apiserver process to appear ...
	I0412 20:26:02.011879  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:26:02.521271  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:26:03.021279  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:26:03.521794  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:26:04.021460  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:26:04.521473  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:26:05.021310  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:26:05.521258  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:26:06.022069  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:26:06.522094  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:26:07.022120  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:26:07.521096  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:26:08.021120  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:26:08.091617  302775 api_server.go:71] duration metric: took 6.079806462s to wait for apiserver process to appear ...
	I0412 20:26:08.091701  302775 api_server.go:87] waiting for apiserver healthz status ...
	I0412 20:26:08.091726  302775 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0412 20:26:08.092170  302775 api_server.go:256] stopped: https://192.168.49.2:8444/healthz: Get "https://192.168.49.2:8444/healthz": dial tcp 192.168.49.2:8444: connect: connection refused
	I0412 20:26:08.592673  302775 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0412 20:26:11.086493  302775 api_server.go:266] https://192.168.49.2:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0412 20:26:11.086525  302775 api_server.go:102] status: https://192.168.49.2:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0412 20:26:11.092362  302775 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0412 20:26:11.097010  302775 api_server.go:266] https://192.168.49.2:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0412 20:26:11.097085  302775 api_server.go:102] status: https://192.168.49.2:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0412 20:26:11.592382  302775 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0412 20:26:11.597320  302775 api_server.go:266] https://192.168.49.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0412 20:26:11.597353  302775 api_server.go:102] status: https://192.168.49.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0412 20:26:12.092945  302775 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0412 20:26:12.097452  302775 api_server.go:266] https://192.168.49.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0412 20:26:12.097482  302775 api_server.go:102] status: https://192.168.49.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0412 20:26:12.593112  302775 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0412 20:26:12.598178  302775 api_server.go:266] https://192.168.49.2:8444/healthz returned 200:
	ok
	I0412 20:26:12.604429  302775 api_server.go:140] control plane version: v1.23.5
	I0412 20:26:12.604455  302775 api_server.go:130] duration metric: took 4.512735667s to wait for apiserver health ...
	I0412 20:26:12.604466  302775 cni.go:93] Creating CNI manager for ""
	I0412 20:26:12.604475  302775 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0412 20:26:12.607164  302775 out.go:176] * Configuring CNI (Container Networking Interface) ...
	I0412 20:26:12.607235  302775 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0412 20:26:12.610895  302775 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.5/kubectl ...
	I0412 20:26:12.610917  302775 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0412 20:26:12.624805  302775 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
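Note: having detected the docker driver plus containerd, minikube picks kindnet and applies the staged manifest with the cluster's own kubectl binary; the exact command from the log, replayable inside the node:

	sudo /var/lib/minikube/binaries/v1.23.5/kubectl apply \
		--kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml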
	I0412 20:26:13.514228  302775 system_pods.go:43] waiting for kube-system pods to appear ...
	I0412 20:26:13.521326  302775 system_pods.go:59] 9 kube-system pods found
	I0412 20:26:13.521387  302775 system_pods.go:61] "coredns-64897985d-c2gzm" [17d60869-0f98-4975-877a-d2ac69c4c6c2] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0412 20:26:13.521400  302775 system_pods.go:61] "etcd-default-k8s-different-port-20220412201228-42006" [90ac8791-2f40-445e-a751-748814d43a72] Running
	I0412 20:26:13.521415  302775 system_pods.go:61] "kindnet-852v4" [d4596d79-4aba-4c96-9fd5-c2c2b2010810] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0412 20:26:13.521437  302775 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20220412201228-42006" [a3eb3b43-f13c-4205-9caf-0b3914050d7c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0412 20:26:13.521450  302775 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20220412201228-42006" [fca7914c-0a48-40de-af60-44c695d023c7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0412 20:26:13.521456  302775 system_pods.go:61] "kube-proxy-nfsgp" [fb26fa90-e38d-4c50-bbdc-aa46859bef70] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0412 20:26:13.521466  302775 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20220412201228-42006" [9fbd69c6-cf7b-4801-b028-f7729f80bf64] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0412 20:26:13.521475  302775 system_pods.go:61] "metrics-server-b955d9d8-8z9c9" [e954cf67-0a7d-42ed-b754-921b79512531] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0412 20:26:13.521484  302775 system_pods.go:61] "storage-provisioner" [c1d494a3-740b-43f4-bd16-12e781074fdd] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0412 20:26:13.521493  302775 system_pods.go:74] duration metric: took 7.243145ms to wait for pod list to return data ...
	I0412 20:26:13.521504  302775 node_conditions.go:102] verifying NodePressure condition ...
	I0412 20:26:13.524664  302775 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0412 20:26:13.524723  302775 node_conditions.go:123] node cpu capacity is 8
	I0412 20:26:13.524744  302775 node_conditions.go:105] duration metric: took 3.23136ms to run NodePressure ...
	I0412 20:26:13.524771  302775 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0412 20:26:13.661578  302775 kubeadm.go:737] waiting for restarted kubelet to initialise ...
	I0412 20:26:13.665722  302775 kubeadm.go:752] kubelet initialised
	I0412 20:26:13.665746  302775 kubeadm.go:753] duration metric: took 4.136738ms waiting for restarted kubelet to initialise ...
	I0412 20:26:13.665755  302775 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0412 20:26:13.670837  302775 pod_ready.go:78] waiting up to 4m0s for pod "coredns-64897985d-c2gzm" in "kube-system" namespace to be "Ready" ...
	I0412 20:26:15.676828  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:18.177431  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:20.676699  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:22.676917  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:25.177312  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:27.677396  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:30.176836  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:32.177928  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:34.676583  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:36.676861  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:38.676927  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:41.177333  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:43.177431  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:45.177567  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:47.676992  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:50.177314  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:52.677354  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:55.177581  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:57.177797  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:59.676784  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:02.176739  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:04.677083  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:07.177282  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:09.677448  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:12.176384  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:14.177069  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:16.177280  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:18.677413  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:21.176971  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:23.676778  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:25.677210  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	[... the same pod_ready.go:102 line repeats roughly every 2.5s, 71 further checks from 20:27:28 through 20:30:11, with the pod still Pending and the identical Unschedulable condition ...]
	I0412 20:30:13.176787  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:30:13.673973  302775 pod_ready.go:81] duration metric: took 4m0.003097375s waiting for pod "coredns-64897985d-c2gzm" in "kube-system" namespace to be "Ready" ...
	E0412 20:30:13.674004  302775 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "coredns-64897985d-c2gzm" in "kube-system" namespace to be "Ready" (will not retry!)
	I0412 20:30:13.674026  302775 pod_ready.go:38] duration metric: took 4m0.008261536s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0412 20:30:13.674088  302775 kubeadm.go:605] restartCluster took 4m15.671526358s
	W0412 20:30:13.674261  302775 out.go:241] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
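For readers tracing the failure: the wait that just expired is a plain condition poll. minikube's pod_ready.go re-fetches the pod roughly every 2.5s and inspects its Ready condition until a 4m0s budget runs out. A minimal client-go sketch of that shape (illustrative only, not minikube's actual implementation; the function name and clientset wiring are assumptions):

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady polls until the pod reports Ready or the budget expires.
func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	deadline := time.After(4 * time.Minute)
	tick := time.NewTicker(2500 * time.Millisecond)
	defer tick.Stop()
	for {
		select {
		case <-deadline:
			return fmt.Errorf("timed out waiting 4m0s for pod %q in %q namespace to be \"Ready\"", name, ns)
		case <-tick.C:
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				continue // transient API error: try again on the next tick
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
			// not Ready yet; minikube logs pod.Status here (the lines above)
		}
	}
}

In this run every probe returned PodScheduled=False because the node never shed its node.kubernetes.io/not-ready taint, so the loop could only end in the timeout logged above.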
	I0412 20:30:13.674296  302775 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0412 20:30:15.434543  302775 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.760223538s)
	I0412 20:30:15.434648  302775 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0412 20:30:15.444487  302775 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0412 20:30:15.452033  302775 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0412 20:30:15.452119  302775 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0412 20:30:15.459066  302775 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
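The probe that produced those four `ls` errors is how minikube decides whether stale kubeadm configs need cleanup: if any of the files is missing, the combined `ls -la` exits with status 2 and cleanup is skipped. A local sketch of the check (minikube actually runs it on the node through its ssh_runner):

import (
	"fmt"
	"os/exec"
)

func staleConfigPresent() bool {
	// All four files must exist for cleanup to be worthwhile; a single
	// missing file makes ls exit non-zero (status 2 in the log above).
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	if err := exec.Command("sudo", append([]string{"ls", "-la"}, files...)...).Run(); err != nil {
		fmt.Println("config check failed, skipping stale config cleanup:", err)
		return false
	}
	return true
}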
	I0412 20:30:15.459111  302775 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0412 20:30:28.943093  302775 out.go:203]   - Generating certificates and keys ...
	I0412 20:30:28.946723  302775 out.go:203]   - Booting up control plane ...
	I0412 20:30:28.949531  302775 out.go:203]   - Configuring RBAC rules ...
	I0412 20:30:28.951251  302775 cni.go:93] Creating CNI manager for ""
	I0412 20:30:28.951270  302775 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0412 20:30:28.954437  302775 out.go:176] * Configuring CNI (Container Networking Interface) ...
	I0412 20:30:28.954502  302775 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0412 20:30:28.958449  302775 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.5/kubectl ...
	I0412 20:30:28.958473  302775 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0412 20:30:28.972610  302775 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
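Those two steps (the in-memory scp and the apply) amount to: write the generated kindnet manifest onto the node, then feed it to the version-pinned kubectl. A sketch under that reading, with the 2429-byte manifest contents assumed:

import (
	"os"
	"os/exec"
)

func applyCNI(manifest []byte) error {
	// Materialize the manifest minikube built in memory...
	if err := os.WriteFile("/var/tmp/minikube/cni.yaml", manifest, 0o644); err != nil {
		return err
	}
	// ...then apply it with the kubectl binary matching the cluster version.
	return exec.Command("sudo", "/var/lib/minikube/binaries/v1.23.5/kubectl",
		"apply", "--kubeconfig=/var/lib/minikube/kubeconfig",
		"-f", "/var/tmp/minikube/cni.yaml").Run()
}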
	I0412 20:30:29.581068  302775 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0412 20:30:29.581147  302775 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl label nodes minikube.k8s.io/version=v1.25.2 minikube.k8s.io/commit=dcd548d63d1c0dcbdc0ffc0bd37d4379117c142f minikube.k8s.io/name=default-k8s-different-port-20220412201228-42006 minikube.k8s.io/updated_at=2022_04_12T20_30_29_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:30:29.581148  302775 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:30:29.588127  302775 ops.go:34] apiserver oom_adj: -16
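The oom_adj line records a sanity check: minikube reads /proc/<apiserver pid>/oom_adj, and -16 on the legacy -17..15 scale means the kernel is very unlikely to OOM-kill the apiserver. A sketch of the same probe (helper name is illustrative; assumes a single pgrep match, as here):

import (
	"os"
	"os/exec"
	"strconv"
	"strings"
)

func apiserverOOMAdj() (int, error) {
	// Equivalent of `cat /proc/$(pgrep kube-apiserver)/oom_adj`.
	pid, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		return 0, err
	}
	raw, err := os.ReadFile("/proc/" + strings.TrimSpace(string(pid)) + "/oom_adj")
	if err != nil {
		return 0, err
	}
	return strconv.Atoi(strings.TrimSpace(string(raw))) // -16 in this run
}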
	I0412 20:30:29.648666  302775 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	[... the same `kubectl get sa default` probe repeats about every 0.5s, 25 further runs from 20:30:30 through 20:30:42, until the default service account appears ...]
	I0412 20:30:42.729297  302775 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:30:42.795666  302775 kubeadm.go:1020] duration metric: took 13.214575797s to wait for elevateKubeSystemPrivileges.
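That 13.2s is dominated by the probe loop above: the "default" ServiceAccount only exists once the controller-manager's token controller has created it, so minikube retries `get sa default` about twice a second until the command succeeds. A sketch of that gate (function name and the deadline value are assumptions):

import (
	"fmt"
	"os/exec"
	"time"
)

func waitDefaultServiceAccount(kubectl string) error {
	// Retry until `kubectl get sa default` succeeds; the service account
	// appears once the token controller has done its first sync.
	deadline := time.Now().Add(2 * time.Minute) // budget is an assumption
	for time.Now().Before(deadline) {
		if exec.Command("sudo", kubectl, "get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig").Run() == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account never appeared")
}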
	I0412 20:30:42.795702  302775 kubeadm.go:393] StartCluster complete in 4m44.840593181s
	I0412 20:30:42.795726  302775 settings.go:142] acquiring lock: {Name:mkaf0259d09993f7f0249c32b54fea561e21f88c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 20:30:42.795894  302775 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0412 20:30:42.797959  302775 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig: {Name:mk47182dadcc139652898f38b199a7292a3b4031 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 20:30:43.316096  302775 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20220412201228-42006" rescaled to 1
	I0412 20:30:43.316236  302775 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0412 20:30:43.316267  302775 addons.go:415] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I0412 20:30:43.316330  302775 addons.go:65] Setting storage-provisioner=true in profile "default-k8s-different-port-20220412201228-42006"
	I0412 20:30:43.316365  302775 addons.go:65] Setting default-storageclass=true in profile "default-k8s-different-port-20220412201228-42006"
	I0412 20:30:43.316387  302775 addons.go:65] Setting metrics-server=true in profile "default-k8s-different-port-20220412201228-42006"
	I0412 20:30:43.316392  302775 addons.go:153] Setting addon storage-provisioner=true in "default-k8s-different-port-20220412201228-42006"
	I0412 20:30:43.316399  302775 addons.go:153] Setting addon metrics-server=true in "default-k8s-different-port-20220412201228-42006"
	I0412 20:30:43.316231  302775 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.23.5 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0412 20:30:43.318925  302775 out.go:176] * Verifying Kubernetes components...
	I0412 20:30:43.316370  302775 addons.go:65] Setting dashboard=true in profile "default-k8s-different-port-20220412201228-42006"
	I0412 20:30:43.319000  302775 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0412 20:30:43.319019  302775 addons.go:153] Setting addon dashboard=true in "default-k8s-different-port-20220412201228-42006"
	I0412 20:30:43.316478  302775 config.go:178] Loaded profile config "default-k8s-different-port-20220412201228-42006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.5
	I0412 20:30:43.316392  302775 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20220412201228-42006"
	W0412 20:30:43.316403  302775 addons.go:165] addon storage-provisioner should already be in state true
	I0412 20:30:43.319204  302775 host.go:66] Checking if "default-k8s-different-port-20220412201228-42006" exists ...
	W0412 20:30:43.316409  302775 addons.go:165] addon metrics-server should already be in state true
	I0412 20:30:43.319309  302775 host.go:66] Checking if "default-k8s-different-port-20220412201228-42006" exists ...
	W0412 20:30:43.319076  302775 addons.go:165] addon dashboard should already be in state true
	I0412 20:30:43.319411  302775 host.go:66] Checking if "default-k8s-different-port-20220412201228-42006" exists ...
	I0412 20:30:43.319521  302775 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220412201228-42006 --format={{.State.Status}}
	I0412 20:30:43.319712  302775 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220412201228-42006 --format={{.State.Status}}
	I0412 20:30:43.319812  302775 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220412201228-42006 --format={{.State.Status}}
	I0412 20:30:43.319884  302775 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220412201228-42006 --format={{.State.Status}}
	I0412 20:30:43.368004  302775 out.go:176]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0412 20:30:43.369733  302775 out.go:176]   - Using image kubernetesui/dashboard:v2.5.1
	I0412 20:30:43.368143  302775 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0412 20:30:43.369830  302775 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0412 20:30:43.371713  302775 out.go:176]   - Using image k8s.gcr.io/echoserver:1.4
	I0412 20:30:43.369909  302775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220412201228-42006
	I0412 20:30:43.371811  302775 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0412 20:30:43.371829  302775 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0412 20:30:43.371894  302775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220412201228-42006
	I0412 20:30:43.373558  302775 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0412 20:30:43.373752  302775 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0412 20:30:43.373772  302775 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0412 20:30:43.373846  302775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220412201228-42006
	I0412 20:30:43.384370  302775 addons.go:153] Setting addon default-storageclass=true in "default-k8s-different-port-20220412201228-42006"
	W0412 20:30:43.384406  302775 addons.go:165] addon default-storageclass should already be in state true
	I0412 20:30:43.384440  302775 host.go:66] Checking if "default-k8s-different-port-20220412201228-42006" exists ...
	I0412 20:30:43.384946  302775 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220412201228-42006 --format={{.State.Status}}
	I0412 20:30:43.415524  302775 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20220412201228-42006" to be "Ready" ...
	I0412 20:30:43.415635  302775 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
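The sed pipeline in that Run line splices a hosts stanza into the CoreDNS Corefile immediately before its forward block, so pods can resolve host.minikube.internal to the network gateway (192.168.49.1 here). After the replace, the affected part of the Corefile reads (forward arguments elided, as in the sed pattern itself):

        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf ...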
	I0412 20:30:43.419849  302775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220412201228-42006/id_rsa Username:docker}
	I0412 20:30:43.421835  302775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220412201228-42006/id_rsa Username:docker}
	I0412 20:30:43.422931  302775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220412201228-42006/id_rsa Username:docker}
	I0412 20:30:43.441543  302775 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0412 20:30:43.441567  302775 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0412 20:30:43.441611  302775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220412201228-42006
	I0412 20:30:43.477201  302775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220412201228-42006/id_rsa Username:docker}
	I0412 20:30:43.584023  302775 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0412 20:30:43.594296  302775 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0412 20:30:43.594323  302775 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0412 20:30:43.594540  302775 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0412 20:30:43.594567  302775 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0412 20:30:43.597433  302775 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0412 20:30:43.611081  302775 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0412 20:30:43.611109  302775 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0412 20:30:43.612709  302775 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0412 20:30:43.612735  302775 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0412 20:30:43.695590  302775 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0412 20:30:43.695620  302775 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0412 20:30:43.695871  302775 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0412 20:30:43.695896  302775 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0412 20:30:43.713161  302775 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0412 20:30:43.783491  302775 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0412 20:30:43.783522  302775 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0412 20:30:43.786723  302775 start.go:777] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
	I0412 20:30:43.804035  302775 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0412 20:30:43.804161  302775 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0412 20:30:43.880364  302775 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0412 20:30:43.880416  302775 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0412 20:30:43.898688  302775 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0412 20:30:43.898715  302775 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0412 20:30:43.979407  302775 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0412 20:30:43.979444  302775 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0412 20:30:44.000255  302775 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0412 20:30:44.000283  302775 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0412 20:30:44.102994  302775 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0412 20:30:44.494063  302775 addons.go:386] Verifying addon metrics-server=true in "default-k8s-different-port-20220412201228-42006"
	I0412 20:30:44.918251  302775 out.go:176] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0412 20:30:44.918280  302775 addons.go:417] enableAddons completed in 1.602020138s
	I0412 20:30:45.423200  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:30:47.923285  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:30:50.422835  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:30:52.923459  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:30:55.422462  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:30:57.923268  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:31:00.422559  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:31:02.422789  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:31:04.422907  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:31:06.923381  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:31:09.422313  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:31:11.922559  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:31:13.922722  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:31:16.423078  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:31:18.423314  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:31:20.923142  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:31:22.923173  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:31:24.923329  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:31:27.423082  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:31:29.922381  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:31:31.922796  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:31:33.923653  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:31:36.422332  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:31:38.423001  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:31:40.922454  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:31:42.923084  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:31:45.423255  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:31:47.922302  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:31:49.924482  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:31:52.422465  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:31:54.922902  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:31:56.923448  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:31:59.422807  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:32:01.422968  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:32:03.923510  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:32:06.422160  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:32:08.423365  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:32:10.922571  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:32:12.922895  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:32:14.923501  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:32:17.423175  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:32:19.922939  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:32:22.421806  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:32:24.422759  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:32:26.423058  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:32:28.922712  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:32:30.922856  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:32:33.422864  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:32:35.923228  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:32:38.423092  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:32:40.922749  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:32:42.923323  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:32:45.422441  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:32:47.423052  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:32:49.922914  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:32:51.923513  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:32:54.422949  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:32:56.423035  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:32:58.923416  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:33:01.422712  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:33:03.422921  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:33:05.923038  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:33:08.422910  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:33:10.923412  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:33:13.423048  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:33:15.922494  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:33:17.923130  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:33:19.923551  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:33:22.422029  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:33:24.422643  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:33:26.423175  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:33:28.923212  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:33:31.422303  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:33:33.423218  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:33:35.923095  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:33:38.422465  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:33:40.423119  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:33:42.924176  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:33:45.422942  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:33:47.923152  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:33:50.422822  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:33:52.923237  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:33:55.423255  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:33:57.923053  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:33:59.923203  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:34:01.923370  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:34:04.422633  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:34:06.922559  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:34:09.422887  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:34:11.423344  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:34:13.922945  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:34:16.423257  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:34:18.922588  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:34:20.923031  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:34:23.423271  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:34:25.423373  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:34:27.922498  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:34:29.922791  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:34:31.922929  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:34:34.423381  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:34:36.923060  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:34:38.923113  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:34:41.422479  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:34:43.422840  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:34:43.425257  302775 node_ready.go:38] duration metric: took 4m0.009696502s waiting for node "default-k8s-different-port-20220412201228-42006" to be "Ready" ...
	I0412 20:34:43.428510  302775 out.go:176] 
	W0412 20:34:43.428724  302775 out.go:241] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0412 20:34:43.428749  302775 out.go:241] * 
	* 
	W0412 20:34:43.429581  302775 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0412 20:34:43.431943  302775 out.go:176] 

                                                
                                                
** /stderr **
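The failure captured above is the readiness wait timing out: node_ready.go polls the node's Ready condition (the log shows a check roughly every 2.5 seconds) and gives up at the deadline, which surfaces as the GUEST_START "timed out waiting for the condition" exit. Below is a minimal Go sketch of that poll loop, assuming kubectl is on PATH and pointed at the cluster; the jsonpath query and helper name are illustrative, not minikube's actual implementation (which goes through the Kubernetes API client).

	// nodeready_sketch.go: illustrative poll loop for a node's Ready condition.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// nodeReady shells out to kubectl and reports whether the named node's
	// Ready condition is "True". Assumption: kubectl is on PATH and the
	// active kubeconfig targets the cluster under test.
	func nodeReady(node string) (bool, error) {
		out, err := exec.Command("kubectl", "get", "node", node,
			"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
		if err != nil {
			return false, err
		}
		return strings.TrimSpace(string(out)) == "True", nil
	}

	func main() {
		const node = "default-k8s-different-port-20220412201228-42006"
		deadline := time.Now().Add(6 * time.Minute) // the log waits "up to 6m0s"
		for time.Now().Before(deadline) {
			if ok, err := nodeReady(node); err == nil && ok {
				fmt.Println("node is Ready")
				return
			}
			fmt.Printf("node %q has status \"Ready\":\"False\"\n", node)
			time.Sleep(2500 * time.Millisecond) // the log polls roughly every 2.5s
		}
		fmt.Println("timed out waiting for the condition")
	}

In this run the node never reported Ready, so a loop like this runs to its deadline, matching the repeated "Ready":"False" lines and the final timeout above.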
start_stop_delete_test.go:243: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p default-k8s-different-port-20220412201228-42006 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.5": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220412201228-42006
helpers_test.go:235: (dbg) docker inspect default-k8s-different-port-20220412201228-42006:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6642b489f96391820ba70b96c7534c3a76d670c12f14b131c414488b6433932f",
	        "Created": "2022-04-12T20:12:37.404174744Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 303040,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-04-12T20:25:41.726729323Z",
	            "FinishedAt": "2022-04-12T20:25:40.439971944Z"
	        },
	        "Image": "sha256:44d43b69f3d5ba7f801dca891b535f23f9839671e82277938ec7dc42a22c50d6",
	        "ResolvConfPath": "/var/lib/docker/containers/6642b489f96391820ba70b96c7534c3a76d670c12f14b131c414488b6433932f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6642b489f96391820ba70b96c7534c3a76d670c12f14b131c414488b6433932f/hostname",
	        "HostsPath": "/var/lib/docker/containers/6642b489f96391820ba70b96c7534c3a76d670c12f14b131c414488b6433932f/hosts",
	        "LogPath": "/var/lib/docker/containers/6642b489f96391820ba70b96c7534c3a76d670c12f14b131c414488b6433932f/6642b489f96391820ba70b96c7534c3a76d670c12f14b131c414488b6433932f-json.log",
	        "Name": "/default-k8s-different-port-20220412201228-42006",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-different-port-20220412201228-42006:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-different-port-20220412201228-42006",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/6c20441da854c76109edadd5c14467eeab1a532a78b987301c8ccc63f013fdb5-init/diff:/var/lib/docker/overlay2/a46d95d024de4bf9705eb193a92586bdab1878cd991975232b71b00099a9dcbd/diff:/var/lib/docker/overlay2/ea82ee4a684697cc3575193cd81b57372b927c9bf8e744fce634f9abd0ce56f9/diff:/var/lib/docker/overlay2/78746ad8dd0d6497f442bd186c99cfd280a7ed0ff07c9d33d217c0f00c8c4565/diff:/var/lib/docker/overlay2/a402f380eceb56655ea5f1e6ca4a61a01ae014a5df04f1a7d02d8f57ff3e6c84/diff:/var/lib/docker/overlay2/b27a231791a4d14a662f9e6e34fdd213411e56cc17149199657aa480018b3c72/diff:/var/lib/docker/overlay2/0a44e7fc2c8d5589d496b9d0585d39e8e142f48342ff9669a35c370bd0298e42/diff:/var/lib/docker/overlay2/6ca98e52ca7d4cc60d14bd2db9969dd3356e0e0ce3acd5bfb5734e6e59f52c7e/diff:/var/lib/docker/overlay2/9957a7c00c30c9d801326093ddf20994a7ee1daaa54bc4dac5c2dd6d8711bd7e/diff:/var/lib/docker/overlay2/f7a1aafecf6ee716c484b5eecbbf236a53607c253fe283c289707fad85495a88/diff:/var/lib/docker/overlay2/fe8cd1
26522650fedfc827751e0b74da9a882ff48de51bc9dee6428ee3bc1122/diff:/var/lib/docker/overlay2/5b4cc7e4a78288063ad39231ca158608aa28e9dec6015d4e186e4c4d6888017f/diff:/var/lib/docker/overlay2/2a754ceb6abee0f92c99667fae50c7899233e94595630e9caffbf73cda1ff741/diff:/var/lib/docker/overlay2/9e69139d9b2bc63ab678378e004018ece394ec37e8289ba5eb30901dda160da5/diff:/var/lib/docker/overlay2/3db8e6413b3a1f309b81d2e1a79c3d239c4e4568b31a6f4bf92511f477f3a61d/diff:/var/lib/docker/overlay2/5ab54e45d09e2d6da4f4228ebae3075b5974e1d847526c1011fc7368392ef0d2/diff:/var/lib/docker/overlay2/6daf6a3cf916347bbbb70ace4aab29dd0f272dc9e39d6b0bf14940470857f1d5/diff:/var/lib/docker/overlay2/b85d29df9ed74e769c82a956eb46ca4eaf51018e94270fee2f58a6f2d82c354c/diff:/var/lib/docker/overlay2/0804b9c30e0dcc68e15139106e47bca1969b010d520652c87ff1476f5da9b799/diff:/var/lib/docker/overlay2/2ef50ba91c77826aae2efca8daf7194c2d56fd8e745476a35413585cdab580a6/diff:/var/lib/docker/overlay2/6f5a272367c30d47254dedc8a42e6b2791c406c3b74fd6a8242d568e4ec362e3/diff:/var/lib/d
ocker/overlay2/e978bd5ca7463862ca1b51d0bf19f95d916464dc866f09f1ab4a5ae4c082c3a9/diff:/var/lib/docker/overlay2/0d60a5805e276ca3bff4824250eab1d2960e9d10d28282e07652204c07dc107f/diff:/var/lib/docker/overlay2/d00efa0bc999057fcf3efdeed81022cc8b9b9871919f11d7d9199a3d22fda41b/diff:/var/lib/docker/overlay2/44d3db5bf7925c4cc8ee60008ff23d799e12ea6586850d797b930fa796788861/diff:/var/lib/docker/overlay2/4af15c525b7ce96b7fd4117c156f53cf9099702641c2907909c12b7019563d44/diff:/var/lib/docker/overlay2/ae9ca4b8da4afb1303158a42ec2ac83dc057c0eaefcd69b7eeaa094ae24a39e7/diff:/var/lib/docker/overlay2/afb8ebd776ddcba17d1056f2350cd0b303c6664964644896a92e9c07252b5d95/diff:/var/lib/docker/overlay2/41b6235378ad54ccaec907f16811e7cd66bd777db63151293f4d8247a33af8f1/diff:/var/lib/docker/overlay2/e079465076581cb577a9d5c7d676cecb6495ddd73d9fc330e734203dd7e48607/diff:/var/lib/docker/overlay2/2d3a7c3e62a99d54d94c2562e13b904453442bda8208afe73cdbe1afdbdd0684/diff:/var/lib/docker/overlay2/b9e03b9cbc1c5a9bbdbb0c99ca5d7539c2fa81a37872c40e07377b52f19
50f4b/diff:/var/lib/docker/overlay2/fd0b72378869edec809e7ead1e4448ae67c73245e0e98d751c51253c80f12d56/diff:/var/lib/docker/overlay2/a34f5625ad35eb2eb1058204a5c23590d70d9aae62a3a0cf05f87501c388ccde/diff:/var/lib/docker/overlay2/6221ad5f4d7b133c35d96ab112cf2eb437196475a72ea0ec8952c058c6644381/diff:/var/lib/docker/overlay2/b33a322162ab62a47e5e731b35da4a989d8a79fcb67e1925b109eace6772370c/diff:/var/lib/docker/overlay2/b52fc81aca49f276f1c709fa139521063628f4042b9da5969a3487a57ee3226b/diff:/var/lib/docker/overlay2/5b4d11a181cad1ea657c7ea99d422b51c942ece21b8d24442b4e8806644e0e1c/diff:/var/lib/docker/overlay2/1620ce1d42f02f38d07f3ff0970e3df6940a3be20f3c7cd835f4f40f5cc2d010/diff:/var/lib/docker/overlay2/43f18c528700dc241024bb24f43a0d5192ecc9575f4b053582410f6265326434/diff:/var/lib/docker/overlay2/e59874999e485483e50da428a499e40c91890c33515857454d7a64bc04ca0c43/diff:/var/lib/docker/overlay2/a120ff1bbaa325cd87d2682d6751d3bf287b66d4bbe31bd1f9f6283d724491ac/diff:/var/lib/docker/overlay2/a6a6f3646fabc023283ff6349b9627be8332c4
bb740688f8fda12c98bd76b725/diff:/var/lib/docker/overlay2/3c2b110c4b3a8689b2792b2b73f99f06bd9858b494c2164e812208579b0223f2/diff:/var/lib/docker/overlay2/98e3881e2e4128283f8d66fafc082bc795e22eab77f135635d3249367b92ba5c/diff:/var/lib/docker/overlay2/ce937670cf64eff618c699bfd15e46c6d70c0184fef594182e5ec6df83b265bc/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6c20441da854c76109edadd5c14467eeab1a532a78b987301c8ccc63f013fdb5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6c20441da854c76109edadd5c14467eeab1a532a78b987301c8ccc63f013fdb5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6c20441da854c76109edadd5c14467eeab1a532a78b987301c8ccc63f013fdb5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-different-port-20220412201228-42006",
	                "Source": "/var/lib/docker/volumes/default-k8s-different-port-20220412201228-42006/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-different-port-20220412201228-42006",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-different-port-20220412201228-42006",
	                "name.minikube.sigs.k8s.io": "default-k8s-different-port-20220412201228-42006",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2f31167bd0056875e8d61db40d68ea99f4fbde39279c09c9f9b944b997d42ff3",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49437"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49436"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49433"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49435"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49434"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/2f31167bd005",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-different-port-20220412201228-42006": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "6642b489f963",
	                        "default-k8s-different-port-20220412201228-42006"
	                    ],
	                    "NetworkID": "e1e5eb80641804e0cf03f9ee1959284f2ec05fd6c94f6b6eb19931fc6032414c",
	                    "EndpointID": "262480d183484a7442b9cbdbeef064e40a773ac2bbccc3622cac03a2bef59cce",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
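The inspect output confirms how the cli_runner lines earlier in the log resolve SSH endpoints: the container publishes 22/tcp on 127.0.0.1:49437, which is exactly the port the sshutil clients connect to. A minimal sketch of that lookup, shelling out to the docker CLI with the same Go template the log shows (the hostPort helper is an illustrative name, not minikube's API; the log additionally wraps the template in quotes for the shell):

	// hostport_sketch.go: resolve the host port a container port is published on.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// hostPort runs `docker container inspect -f ...` with the template seen
	// in the log: {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}
	func hostPort(container, port string) (string, error) {
		tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}`, port)
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		// Per the Ports block above, "22/tcp" resolves to 127.0.0.1:49437.
		p, err := hostPort("default-k8s-different-port-20220412201228-42006", "22/tcp")
		if err != nil {
			panic(err)
		}
		fmt.Println("ssh is published on host port", p)
	}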
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220412201228-42006 -n default-k8s-different-port-20220412201228-42006
helpers_test.go:244: <<< TestStartStop/group/default-k8s-different-port/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-different-port-20220412201228-42006 logs -n 25

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/SecondStart
helpers_test.go:252: TestStartStop/group/default-k8s-different-port/serial/SecondStart logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                            Args                            |                     Profile                     |  User   | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| addons  | enable dashboard -p                                        | newest-cni-20220412201253-42006                 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:14:08 UTC | Tue, 12 Apr 2022 20:14:08 UTC |
	|         | newest-cni-20220412201253-42006                            |                                                 |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                               |                               |
	| start   | -p newest-cni-20220412201253-42006 --memory=2200           | newest-cni-20220412201253-42006                 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:14:08 UTC | Tue, 12 Apr 2022 20:14:42 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                 |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                 |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                 |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                 |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=containerd            |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.23.6-rc.0                          |                                                 |         |         |                               |                               |
	| ssh     | -p                                                         | newest-cni-20220412201253-42006                 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:14:43 UTC | Tue, 12 Apr 2022 20:14:43 UTC |
	|         | newest-cni-20220412201253-42006                            |                                                 |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                               |                               |
	| pause   | -p                                                         | newest-cni-20220412201253-42006                 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:14:43 UTC | Tue, 12 Apr 2022 20:14:44 UTC |
	|         | newest-cni-20220412201253-42006                            |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                               |                               |
	| unpause | -p                                                         | newest-cni-20220412201253-42006                 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:14:45 UTC | Tue, 12 Apr 2022 20:14:45 UTC |
	|         | newest-cni-20220412201253-42006                            |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                               |                               |
	| delete  | -p                                                         | newest-cni-20220412201253-42006                 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:14:46 UTC | Tue, 12 Apr 2022 20:14:49 UTC |
	|         | newest-cni-20220412201253-42006                            |                                                 |         |         |                               |                               |
	| delete  | -p                                                         | newest-cni-20220412201253-42006                 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:14:49 UTC | Tue, 12 Apr 2022 20:14:49 UTC |
	|         | newest-cni-20220412201253-42006                            |                                                 |         |         |                               |                               |
	| -p      | old-k8s-version-20220412200421-42006                       | old-k8s-version-20220412200421-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:17:18 UTC | Tue, 12 Apr 2022 20:17:19 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	| -p      | old-k8s-version-20220412200421-42006                       | old-k8s-version-20220412200421-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:17:20 UTC | Tue, 12 Apr 2022 20:17:21 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | old-k8s-version-20220412200421-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:17:22 UTC | Tue, 12 Apr 2022 20:17:22 UTC |
	|         | old-k8s-version-20220412200421-42006                       |                                                 |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                               |                               |
	| -p      | default-k8s-different-port-20220412201228-42006            | default-k8s-different-port-20220412201228-42006 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:17:23 UTC | Tue, 12 Apr 2022 20:17:24 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	| stop    | -p                                                         | old-k8s-version-20220412200421-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:17:23 UTC | Tue, 12 Apr 2022 20:17:28 UTC |
	|         | old-k8s-version-20220412200421-42006                       |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | old-k8s-version-20220412200421-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:17:29 UTC | Tue, 12 Apr 2022 20:17:29 UTC |
	|         | old-k8s-version-20220412200421-42006                       |                                                 |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                               |                               |
	| -p      | embed-certs-20220412200510-42006                           | embed-certs-20220412200510-42006                | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:18:10 UTC | Tue, 12 Apr 2022 20:18:11 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	| -p      | embed-certs-20220412200510-42006                           | embed-certs-20220412200510-42006                | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:18:13 UTC | Tue, 12 Apr 2022 20:18:13 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | embed-certs-20220412200510-42006                | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:18:14 UTC | Tue, 12 Apr 2022 20:18:14 UTC |
	|         | embed-certs-20220412200510-42006                           |                                                 |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                               |                               |
	| stop    | -p                                                         | embed-certs-20220412200510-42006                | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:18:15 UTC | Tue, 12 Apr 2022 20:18:25 UTC |
	|         | embed-certs-20220412200510-42006                           |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | embed-certs-20220412200510-42006                | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:18:25 UTC | Tue, 12 Apr 2022 20:18:25 UTC |
	|         | embed-certs-20220412200510-42006                           |                                                 |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                               |                               |
	| -p      | default-k8s-different-port-20220412201228-42006            | default-k8s-different-port-20220412201228-42006 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:25:26 UTC | Tue, 12 Apr 2022 20:25:27 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	| -p      | default-k8s-different-port-20220412201228-42006            | default-k8s-different-port-20220412201228-42006 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:25:28 UTC | Tue, 12 Apr 2022 20:25:29 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | default-k8s-different-port-20220412201228-42006 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:25:29 UTC | Tue, 12 Apr 2022 20:25:30 UTC |
	|         | default-k8s-different-port-20220412201228-42006            |                                                 |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                               |                               |
	| stop    | -p                                                         | default-k8s-different-port-20220412201228-42006 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:25:30 UTC | Tue, 12 Apr 2022 20:25:40 UTC |
	|         | default-k8s-different-port-20220412201228-42006            |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | default-k8s-different-port-20220412201228-42006 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:25:40 UTC | Tue, 12 Apr 2022 20:25:40 UTC |
	|         | default-k8s-different-port-20220412201228-42006            |                                                 |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                               |                               |
	| -p      | old-k8s-version-20220412200421-42006                       | old-k8s-version-20220412200421-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:27:23 UTC | Tue, 12 Apr 2022 20:27:24 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	| -p      | embed-certs-20220412200510-42006                           | embed-certs-20220412200510-42006                | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:27:28 UTC | Tue, 12 Apr 2022 20:27:28 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	|---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/04/12 20:25:40
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.18 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0412 20:25:40.977489  302775 out.go:297] Setting OutFile to fd 1 ...
	I0412 20:25:40.977641  302775 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0412 20:25:40.977651  302775 out.go:310] Setting ErrFile to fd 2...
	I0412 20:25:40.977656  302775 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0412 20:25:40.977775  302775 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/bin
	I0412 20:25:40.978024  302775 out.go:304] Setting JSON to false
	I0412 20:25:40.979319  302775 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":11294,"bootTime":1649783847,"procs":329,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1023-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0412 20:25:40.979397  302775 start.go:125] virtualization: kvm guest
	I0412 20:25:40.982252  302775 out.go:176] * [default-k8s-different-port-20220412201228-42006] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	I0412 20:25:40.984292  302775 out.go:176]   - MINIKUBE_LOCATION=13812
	I0412 20:25:40.982508  302775 notify.go:193] Checking for updates...
	I0412 20:25:40.986069  302775 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0412 20:25:40.987699  302775 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0412 20:25:40.989177  302775 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube
	I0412 20:25:40.990958  302775 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0412 20:25:40.991481  302775 config.go:178] Loaded profile config "default-k8s-different-port-20220412201228-42006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.5
	I0412 20:25:40.992603  302775 driver.go:346] Setting default libvirt URI to qemu:///system
	I0412 20:25:41.036514  302775 docker.go:137] docker version: linux-20.10.14
	I0412 20:25:41.036604  302775 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0412 20:25:41.138222  302775 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:75 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2022-04-12 20:25:41.069111625 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1023-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662799872 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0412 20:25:41.138342  302775 docker.go:254] overlay module found
	I0412 20:25:41.140887  302775 out.go:176] * Using the docker driver based on existing profile
	I0412 20:25:41.140919  302775 start.go:284] selected driver: docker
	I0412 20:25:41.140926  302775 start.go:801] validating driver "docker" against &{Name:default-k8s-different-port-20220412201228-42006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:default-k8s-different-port-20220412201228-42006 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.23.5 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0412 20:25:41.141041  302775 start.go:812] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	W0412 20:25:41.141086  302775 oci.go:120] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0412 20:25:41.141109  302775 out.go:241] ! Your cgroup does not allow setting memory.
	I0412 20:25:41.142724  302775 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0412 20:25:41.143315  302775 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0412 20:25:41.241191  302775 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:75 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2022-04-12 20:25:41.17623516 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1023-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662799872 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	W0412 20:25:41.241354  302775 oci.go:120] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0412 20:25:41.241406  302775 out.go:241] ! Your cgroup does not allow setting memory.
	I0412 20:25:41.243729  302775 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
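
The two warnings above are derived from probing the environment around the docker system info call logged just before them. A minimal sketch of that kind of capability check, assuming only the MemoryLimit flag from the JSON output; the real check in oci.go also consults the host's cgroup filesystem and may differ:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// dockerCaps holds just the capability flags this sketch looks at; docker
// info exposes many more fields, as the log line above shows.
type dockerCaps struct {
	MemoryLimit bool
	SwapLimit   bool
}

func main() {
	// The same probe the log records: docker system info --format "{{json .}}"
	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
	if err != nil {
		fmt.Println("docker info failed:", err)
		return
	}
	var caps dockerCaps
	if err := json.Unmarshal(out, &caps); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	if !caps.MemoryLimit {
		fmt.Println("! Your cgroup does not allow setting memory.")
	}
}
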
	I0412 20:25:41.243836  302775 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0412 20:25:41.243861  302775 cni.go:93] Creating CNI manager for ""
	I0412 20:25:41.243872  302775 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0412 20:25:41.243889  302775 start_flags.go:306] config:
	{Name:default-k8s-different-port-20220412201228-42006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:default-k8s-different-port-20220412201228-42006 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.23.5 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0412 20:25:41.246889  302775 out.go:176] * Starting control plane node default-k8s-different-port-20220412201228-42006 in cluster default-k8s-different-port-20220412201228-42006
	I0412 20:25:41.246928  302775 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0412 20:25:41.248537  302775 out.go:176] * Pulling base image ...
	I0412 20:25:41.248572  302775 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime containerd
	I0412 20:25:41.248612  302775 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-containerd-overlay2-amd64.tar.lz4
	I0412 20:25:41.248642  302775 cache.go:57] Caching tarball of preloaded images
	I0412 20:25:41.248665  302775 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon
	I0412 20:25:41.248918  302775 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0412 20:25:41.248940  302775 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.5 on containerd
	I0412 20:25:41.249111  302775 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220412201228-42006/config.json ...
	I0412 20:25:41.295232  302775 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon, skipping pull
	I0412 20:25:41.295265  302775 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 exists in daemon, skipping load
	I0412 20:25:41.295288  302775 cache.go:206] Successfully downloaded all kic artifacts
	I0412 20:25:41.295333  302775 start.go:352] acquiring machines lock for default-k8s-different-port-20220412201228-42006: {Name:mk673e2ef5ad74005354b6f8044ae48e370ea3c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0412 20:25:41.295441  302775 start.go:356] acquired machines lock for "default-k8s-different-port-20220412201228-42006" in 78.98µs
	I0412 20:25:41.295472  302775 start.go:94] Skipping create...Using existing machine configuration
	I0412 20:25:41.295481  302775 fix.go:55] fixHost starting: 
	I0412 20:25:41.295714  302775 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220412201228-42006 --format={{.State.Status}}
	I0412 20:25:41.330052  302775 fix.go:103] recreateIfNeeded on default-k8s-different-port-20220412201228-42006: state=Stopped err=<nil>
	W0412 20:25:41.330099  302775 fix.go:129] unexpected machine state, will restart: <nil>
	I0412 20:25:39.404942  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:25:41.405860  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:25:43.905123  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:25:41.529434  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:25:44.030080  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:25:41.332812  302775 out.go:176] * Restarting existing docker container for "default-k8s-different-port-20220412201228-42006" ...
	I0412 20:25:41.332900  302775 cli_runner.go:164] Run: docker start default-k8s-different-port-20220412201228-42006
	I0412 20:25:41.735198  302775 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220412201228-42006 --format={{.State.Status}}
	I0412 20:25:41.771480  302775 kic.go:416] container "default-k8s-different-port-20220412201228-42006" state is running.
	I0412 20:25:41.771899  302775 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220412201228-42006
	I0412 20:25:41.807070  302775 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220412201228-42006/config.json ...
	I0412 20:25:41.807321  302775 machine.go:88] provisioning docker machine ...
	I0412 20:25:41.807352  302775 ubuntu.go:169] provisioning hostname "default-k8s-different-port-20220412201228-42006"
	I0412 20:25:41.807404  302775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220412201228-42006
	I0412 20:25:41.843643  302775 main.go:134] libmachine: Using SSH client type: native
	I0412 20:25:41.843852  302775 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7d71c0] 0x7da220 <nil>  [] 0s} 127.0.0.1 49437 <nil> <nil>}
	I0412 20:25:41.843870  302775 main.go:134] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20220412201228-42006 && echo "default-k8s-different-port-20220412201228-42006" | sudo tee /etc/hostname
	I0412 20:25:41.844512  302775 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60986->127.0.0.1:49437: read: connection reset by peer
	I0412 20:25:44.977976  302775 main.go:134] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20220412201228-42006
	
	I0412 20:25:44.978060  302775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220412201228-42006
	I0412 20:25:45.012801  302775 main.go:134] libmachine: Using SSH client type: native
	I0412 20:25:45.012959  302775 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7d71c0] 0x7da220 <nil>  [] 0s} 127.0.0.1 49437 <nil> <nil>}
	I0412 20:25:45.012982  302775 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20220412201228-42006' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20220412201228-42006/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20220412201228-42006' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0412 20:25:45.132428  302775 main.go:134] libmachine: SSH cmd err, output: <nil>: 
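
The SSH command above updates /etc/hosts idempotently: leave the file untouched if the hostname is already present, otherwise rewrite an existing 127.0.1.1 entry or append a new one. The same logic as a small Go sketch (illustrative only; minikube runs it remotely as shell, as shown):

package main

import (
	"fmt"
	"strings"
)

// pinHostname mirrors the shell above: return hosts unchanged if the name
// is already present, rewrite an existing 127.0.1.1 entry if there is one,
// and append a new entry otherwise.
func pinHostname(hosts, name string) string {
	lines := strings.Split(hosts, "\n")
	for _, l := range lines {
		if strings.HasSuffix(l, " "+name) || strings.HasSuffix(l, "\t"+name) {
			return hosts // already pinned
		}
	}
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + name
			return strings.Join(lines, "\n")
		}
	}
	return hosts + "\n127.0.1.1 " + name
}

func main() {
	fmt.Println(pinHostname("127.0.0.1 localhost", "default-k8s-different-port-20220412201228-42006"))
}
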
	I0412 20:25:45.132458  302775 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube}
	I0412 20:25:45.132515  302775 ubuntu.go:177] setting up certificates
	I0412 20:25:45.132527  302775 provision.go:83] configureAuth start
	I0412 20:25:45.132583  302775 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220412201228-42006
	I0412 20:25:45.167292  302775 provision.go:138] copyHostCerts
	I0412 20:25:45.167378  302775 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem, removing ...
	I0412 20:25:45.167393  302775 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem
	I0412 20:25:45.167463  302775 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem (1082 bytes)
	I0412 20:25:45.167565  302775 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem, removing ...
	I0412 20:25:45.167579  302775 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem
	I0412 20:25:45.167616  302775 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem (1123 bytes)
	I0412 20:25:45.167686  302775 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem, removing ...
	I0412 20:25:45.167698  302775 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem
	I0412 20:25:45.167731  302775 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem (1675 bytes)
	I0412 20:25:45.167790  302775 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20220412201228-42006 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-different-port-20220412201228-42006]
	I0412 20:25:45.287902  302775 provision.go:172] copyRemoteCerts
	I0412 20:25:45.287991  302775 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0412 20:25:45.288040  302775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220412201228-42006
	I0412 20:25:45.322519  302775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220412201228-42006/id_rsa Username:docker}
	I0412 20:25:45.411995  302775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0412 20:25:45.430261  302775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem --> /etc/docker/server.pem (1310 bytes)
	I0412 20:25:45.448712  302775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0412 20:25:45.466551  302775 provision.go:86] duration metric: configureAuth took 334.00574ms
	I0412 20:25:45.466577  302775 ubuntu.go:193] setting minikube options for container-runtime
	I0412 20:25:45.466762  302775 config.go:178] Loaded profile config "default-k8s-different-port-20220412201228-42006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.5
	I0412 20:25:45.466775  302775 machine.go:91] provisioned docker machine in 3.659438406s
	I0412 20:25:45.466782  302775 start.go:306] post-start starting for "default-k8s-different-port-20220412201228-42006" (driver="docker")
	I0412 20:25:45.466788  302775 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0412 20:25:45.466829  302775 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0412 20:25:45.466867  302775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220412201228-42006
	I0412 20:25:45.501481  302775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220412201228-42006/id_rsa Username:docker}
	I0412 20:25:45.588112  302775 ssh_runner.go:195] Run: cat /etc/os-release
	I0412 20:25:45.591046  302775 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0412 20:25:45.591069  302775 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0412 20:25:45.591080  302775 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0412 20:25:45.591089  302775 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0412 20:25:45.591103  302775 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/addons for local assets ...
	I0412 20:25:45.591152  302775 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files for local assets ...
	I0412 20:25:45.591229  302775 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/420062.pem -> 420062.pem in /etc/ssl/certs
	I0412 20:25:45.591327  302775 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0412 20:25:45.598574  302775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/420062.pem --> /etc/ssl/certs/420062.pem (1708 bytes)
	I0412 20:25:45.617879  302775 start.go:309] post-start completed in 151.076407ms
	I0412 20:25:45.617968  302775 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0412 20:25:45.618023  302775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220412201228-42006
	I0412 20:25:45.652386  302775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220412201228-42006/id_rsa Username:docker}
	I0412 20:25:45.736884  302775 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0412 20:25:45.741043  302775 fix.go:57] fixHost completed within 4.445551228s
	I0412 20:25:45.741076  302775 start.go:81] releasing machines lock for "default-k8s-different-port-20220412201228-42006", held for 4.445612789s
	I0412 20:25:45.741159  302775 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220412201228-42006
	I0412 20:25:45.775496  302775 ssh_runner.go:195] Run: systemctl --version
	I0412 20:25:45.775542  302775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220412201228-42006
	I0412 20:25:45.775584  302775 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0412 20:25:45.775646  302775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220412201228-42006
	I0412 20:25:45.812306  302775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220412201228-42006/id_rsa Username:docker}
	I0412 20:25:45.812626  302775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220412201228-42006/id_rsa Username:docker}
	I0412 20:25:45.921246  302775 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0412 20:25:45.933022  302775 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0412 20:25:45.942974  302775 docker.go:183] disabling docker service ...
	I0412 20:25:45.943055  302775 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0412 20:25:45.953239  302775 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0412 20:25:45.962782  302775 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0412 20:25:46.404485  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:25:48.404784  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:25:46.529944  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:25:48.530319  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:25:46.046623  302775 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0412 20:25:46.129007  302775 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0412 20:25:46.138577  302775 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0412 20:25:46.152328  302775 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLm1vbml0b3IudjEuY2dyb3VwcyJdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNiIKICAgIHN0YXRzX2NvbGxlY3RfcGVyaW9kID0gMTAKICAgIGVuYWJsZV90bHNfc3RyZWFtaW5nID0gZmFsc2UKICAgIG1heF9jb250YWluZXJfbG9nX2xpbmVfc2l6ZSA9IDE2Mzg0CiAgICByZXN0cmljdF9vb21fc2NvcmVfYWRqID0gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZF0KICAgICAgZGlzY2FyZF91bnBhY2tlZF9sYXllcnMgPSB0cnVlCiAgICAgIHNuYXBzaG90dGVyID0gIm92ZXJsYXlmcyIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQuZGVmYXVsdF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnVudHJ1c3RlZF93b3JrbG9hZF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICIiCiAgICAgICAgcnVudGltZV9lbmdpbmUgPSAiIgogICAgICAgIHJ1bnRpbWVfcm9vdCA9ICIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzLnJ1bmNdCiAgICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICBTeXN0ZW1kQ2dyb3VwID0gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY25pXQogICAgICBiaW5fZGlyID0gIi9vcHQvY25pL2JpbiIKICAgICAgY29uZl9kaXIgPSAiL2V0Yy9jbmkvbmV0Lm1rIgogICAgICBjb25mX3RlbXBsYXRlID0gIiIKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5yZWdpc3RyeV0KICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5Lm1pcnJvcnNdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5Lm1pcnJvcnMuImRvY2tlci5pbyJdCiAgICAgICAgICBlbmRwb2ludCA9IFsiaHR0cHM6Ly9yZWdpc3RyeS0xLmRvY2tlci5pbyJdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuc2VydmljZS52MS5kaWZmLXNlcnZpY2UiXQogICAgZGVmYXVsdCA9IFsid2Fsa2luZyJdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ2MudjEuc2NoZWR1bGVyIl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
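
The containerd configuration is shipped as one base64 blob so it survives the layers of shell quoting, then decoded into /etc/containerd/config.toml on the node. The payload can be decoded offline the same way; a sketch (substitute the full string from the log line above for the short prefix used here):

package main

import (
	"encoding/base64"
	"fmt"
)

func main() {
	// Paste the full base64 string from the log line above; the prefix
	// used here decodes to just the first line of the file.
	payload := "dmVyc2lvbiA9IDIK"
	raw, err := base64.StdEncoding.DecodeString(payload)
	if err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	fmt.Print(string(raw)) // prints: version = 2
}
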
	I0412 20:25:46.166473  302775 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0412 20:25:46.173272  302775 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0412 20:25:46.180113  302775 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0412 20:25:46.251894  302775 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0412 20:25:46.327719  302775 start.go:441] Will wait 60s for socket path /run/containerd/containerd.sock
	I0412 20:25:46.327799  302775 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0412 20:25:46.331793  302775 start.go:462] Will wait 60s for crictl version
	I0412 20:25:46.331863  302775 ssh_runner.go:195] Run: sudo crictl version
	I0412 20:25:46.357306  302775 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-04-12T20:25:46Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
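
crictl fails here because containerd's CRI server has not finished initializing, so retry.go schedules another attempt roughly 11 s later; the log resumes at 20:25:57 with a successful call. The general shape of such a retry loop, as a sketch (the helper below is illustrative, not minikube's retry package):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// retry keeps calling fn until it succeeds or attempts run out, roughly
// doubling the delay each time, like the "will retry after 11.04s" above.
func retry(attempts int, delay time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		fmt.Printf("will retry after %s: %v\n", delay, err)
		time.Sleep(delay)
		delay *= 2
	}
	return fmt.Errorf("all %d attempts failed: %w", attempts, err)
}

func main() {
	_ = retry(4, 2*time.Second, func() error {
		return exec.Command("sudo", "crictl", "version").Run()
	})
}
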
	I0412 20:25:50.405078  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:25:52.905509  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:25:51.029894  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:25:53.030953  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:25:55.529321  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:25:57.404189  302775 ssh_runner.go:195] Run: sudo crictl version
	I0412 20:25:57.428756  302775 start.go:471] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.5.10
	RuntimeApiVersion:  v1alpha2
	I0412 20:25:57.428821  302775 ssh_runner.go:195] Run: containerd --version
	I0412 20:25:57.451527  302775 ssh_runner.go:195] Run: containerd --version
	I0412 20:25:57.476141  302775 out.go:176] * Preparing Kubernetes v1.23.5 on containerd 1.5.10 ...
	I0412 20:25:57.476238  302775 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220412201228-42006 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0412 20:25:57.510584  302775 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0412 20:25:57.514080  302775 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0412 20:25:55.405528  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:25:57.904637  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:25:57.529524  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:25:59.529890  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:25:57.525999  302775 out.go:176]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0412 20:25:57.526084  302775 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime containerd
	I0412 20:25:57.526141  302775 ssh_runner.go:195] Run: sudo crictl images --output json
	I0412 20:25:57.550533  302775 containerd.go:607] all images are preloaded for containerd runtime.
	I0412 20:25:57.550557  302775 containerd.go:521] Images already preloaded, skipping extraction
	I0412 20:25:57.550612  302775 ssh_runner.go:195] Run: sudo crictl images --output json
	I0412 20:25:57.574550  302775 containerd.go:607] all images are preloaded for containerd runtime.
	I0412 20:25:57.574580  302775 cache_images.go:84] Images are preloaded, skipping loading
	I0412 20:25:57.574639  302775 ssh_runner.go:195] Run: sudo crictl info
	I0412 20:25:57.599639  302775 cni.go:93] Creating CNI manager for ""
	I0412 20:25:57.599668  302775 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0412 20:25:57.599690  302775 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0412 20:25:57.599711  302775 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8444 KubernetesVersion:v1.23.5 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20220412201228-42006 NodeName:default-k8s-different-port-20220412201228-42006 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0412 20:25:57.599848  302775 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "default-k8s-different-port-20220412201228-42006"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.5
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
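
The generated kubeadm config above is one file holding four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A sketch of walking such a multi-document file with gopkg.in/yaml.v3, assuming the file on disk is valid YAML (the "%!"(MISSING) artifacts above appear to be a logging quirk, not file content):

package main

import (
	"errors"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("kubeadm.yaml") // e.g. a copy of /var/tmp/minikube/kubeadm.yaml.new
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break
			}
			panic(err)
		}
		// Each document names its own kind and apiVersion.
		fmt.Printf("%v (%v)\n", doc["kind"], doc["apiVersion"])
	}
}
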
	
	I0412 20:25:57.599941  302775 kubeadm.go:936] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.5/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=default-k8s-different-port-20220412201228-42006 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.5 ClusterName:default-k8s-different-port-20220412201228-42006 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0412 20:25:57.600004  302775 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.5
	I0412 20:25:57.607520  302775 binaries.go:44] Found k8s binaries, skipping transfer
	I0412 20:25:57.607582  302775 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0412 20:25:57.614505  302775 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (592 bytes)
	I0412 20:25:57.627492  302775 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0412 20:25:57.640002  302775 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2076 bytes)
	I0412 20:25:57.652626  302775 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0412 20:25:57.655502  302775 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0412 20:25:57.664909  302775 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220412201228-42006 for IP: 192.168.49.2
	I0412 20:25:57.665006  302775 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.key
	I0412 20:25:57.665052  302775 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.key
	I0412 20:25:57.665122  302775 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220412201228-42006/client.key
	I0412 20:25:57.665173  302775 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220412201228-42006/apiserver.key.dd3b5fb2
	I0412 20:25:57.665208  302775 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220412201228-42006/proxy-client.key
	I0412 20:25:57.665293  302775 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/42006.pem (1338 bytes)
	W0412 20:25:57.665321  302775 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/42006_empty.pem, impossibly tiny 0 bytes
	I0412 20:25:57.665332  302775 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem (1679 bytes)
	I0412 20:25:57.665358  302775 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem (1082 bytes)
	I0412 20:25:57.665384  302775 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem (1123 bytes)
	I0412 20:25:57.665409  302775 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem (1675 bytes)
	I0412 20:25:57.665455  302775 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/420062.pem (1708 bytes)
	I0412 20:25:57.666053  302775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220412201228-42006/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0412 20:25:57.683954  302775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220412201228-42006/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0412 20:25:57.701541  302775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220412201228-42006/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0412 20:25:57.719461  302775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220412201228-42006/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0412 20:25:57.737734  302775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0412 20:25:57.756457  302775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0412 20:25:57.774968  302775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0412 20:25:57.793059  302775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0412 20:25:57.810982  302775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0412 20:25:57.829015  302775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/42006.pem --> /usr/share/ca-certificates/42006.pem (1338 bytes)
	I0412 20:25:57.847312  302775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/420062.pem --> /usr/share/ca-certificates/420062.pem (1708 bytes)
	I0412 20:25:57.864991  302775 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0412 20:25:57.878055  302775 ssh_runner.go:195] Run: openssl version
	I0412 20:25:57.883971  302775 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/420062.pem && ln -fs /usr/share/ca-certificates/420062.pem /etc/ssl/certs/420062.pem"
	I0412 20:25:57.892175  302775 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/420062.pem
	I0412 20:25:57.895736  302775 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Apr 12 19:26 /usr/share/ca-certificates/420062.pem
	I0412 20:25:57.895785  302775 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/420062.pem
	I0412 20:25:57.900802  302775 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/420062.pem /etc/ssl/certs/3ec20f2e.0"
	I0412 20:25:57.908397  302775 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0412 20:25:57.916262  302775 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0412 20:25:57.919469  302775 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Apr 12 19:21 /usr/share/ca-certificates/minikubeCA.pem
	I0412 20:25:57.919524  302775 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0412 20:25:57.924891  302775 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0412 20:25:57.932113  302775 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/42006.pem && ln -fs /usr/share/ca-certificates/42006.pem /etc/ssl/certs/42006.pem"
	I0412 20:25:57.940241  302775 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42006.pem
	I0412 20:25:57.943396  302775 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Apr 12 19:26 /usr/share/ca-certificates/42006.pem
	I0412 20:25:57.943447  302775 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42006.pem
	I0412 20:25:57.948339  302775 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/42006.pem /etc/ssl/certs/51391683.0"
	I0412 20:25:57.955118  302775 kubeadm.go:391] StartCluster: {Name:default-k8s-different-port-20220412201228-42006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:default-k8s-different-port-20220412201228-42006 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.23.5 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0412 20:25:57.955221  302775 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0412 20:25:57.955270  302775 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0412 20:25:57.980566  302775 cri.go:87] found id: "9833ae46466ccbb9055512e120cc659bee6ff8ac05bf843caeb9217333fd6b63"
	I0412 20:25:57.980602  302775 cri.go:87] found id: "e86db06fb9ce1685b312bc36622f28895b85dab6e39ee399901dce4efc6da848"
	I0412 20:25:57.980613  302775 cri.go:87] found id: "51def5f5fb57c8ab61a9c585b1fe038e725e93a3a81684c7e48cceffbcd0e646"
	I0412 20:25:57.980624  302775 cri.go:87] found id: "3c8657a1a5932876c532e5632e32b1b7bd034c015a4b5519a1ff53cf749d1ffd"
	I0412 20:25:57.980634  302775 cri.go:87] found id: "1032ec9dc604b2d805be253a0f7df89424fc5ef71ef86566ee57cd79cf66939c"
	I0412 20:25:57.980651  302775 cri.go:87] found id: "71af7fb31571e3cef12dcdba3ab49897e95bdbe6c1d9d6d5bbb1c36c97242cda"
	I0412 20:25:57.980666  302775 cri.go:87] found id: ""
	I0412 20:25:57.980719  302775 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0412 20:25:57.995137  302775 cri.go:114] JSON = null
	W0412 20:25:57.995186  302775 kubeadm.go:398] unpause failed: list paused: list returned 0 containers, but ps returned 6
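
The warning above arises because `crictl ps -a` reports 6 kube-system containers while `runc --root /run/containerd/runc/k8s.io list -f json` prints `null`, so the unpause path finds nothing it considers paused. A hedged Go sketch reproducing just that comparison (a hypothetical re-creation, not minikube's kubeadm.go):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

// comparePausedCounts re-creates the check behind "list returned 0
// containers, but ps returned 6": crictl enumerates all kube-system
// containers, runc lists what it tracks in the k8s.io root.
func comparePausedCounts() error {
	psOut, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return err
	}
	ids := strings.Fields(string(psOut))

	listOut, err := exec.Command("sudo", "runc",
		"--root", "/run/containerd/runc/k8s.io", "list", "-f", "json").Output()
	if err != nil {
		return err
	}
	var runcContainers []map[string]interface{}
	// runc prints "null" when it tracks no containers; Unmarshal leaves the
	// slice nil in that case, matching the "JSON = null" log line.
	if err := json.Unmarshal(listOut, &runcContainers); err != nil {
		return err
	}
	if len(runcContainers) != len(ids) {
		fmt.Printf("unpause mismatch: runc lists %d, crictl ps lists %d\n",
			len(runcContainers), len(ids))
	}
	return nil
}

func main() { _ = comparePausedCounts() }
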
	I0412 20:25:57.995232  302775 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0412 20:25:58.002528  302775 kubeadm.go:402] found existing configuration files, will attempt cluster restart
	I0412 20:25:58.002554  302775 kubeadm.go:601] restartCluster start
	I0412 20:25:58.002599  302775 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0412 20:25:58.009347  302775 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:25:58.010180  302775 kubeconfig.go:116] verify returned: extract IP: "default-k8s-different-port-20220412201228-42006" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0412 20:25:58.010679  302775 kubeconfig.go:127] "default-k8s-different-port-20220412201228-42006" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig - will repair!
	I0412 20:25:58.011431  302775 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig: {Name:mk47182dadcc139652898f38b199a7292a3b4031 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
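
The kubeconfig repair above runs under a write lock with Delay:500ms and Timeout:1m0s. As a rough illustration only, here is one generic way to get that retry-until-timeout behavior with an exclusive lockfile; minikube's actual lock.go may use a different mechanism, and the paths below are hypothetical:

package main

import (
	"fmt"
	"os"
	"time"
)

// withFileLock retries creating an exclusive lockfile every delay until
// timeout, runs fn while holding it, then releases the lock.
func withFileLock(lockPath string, delay, timeout time.Duration, fn func() error) error {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(lockPath, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			defer os.Remove(lockPath)
			return fn()
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("could not acquire %s within %s", lockPath, timeout)
		}
		time.Sleep(delay)
	}
}

func main() {
	err := withFileLock("/tmp/kubeconfig.lock", 500*time.Millisecond, time.Minute, func() error {
		// repair the kubeconfig here; 500ms/1m mirror the Delay/Timeout above
		return nil
	})
	if err != nil {
		fmt.Println(err)
	}
}
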
	I0412 20:25:58.013184  302775 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0412 20:25:58.020529  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:25:58.020588  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:25:58.029161  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:25:58.229565  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:25:58.229683  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:25:58.238841  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:25:58.430075  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:25:58.430153  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:25:58.439240  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:25:58.629511  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:25:58.629591  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:25:58.638727  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:25:58.829920  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:25:58.830002  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:25:58.839034  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:25:59.030207  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:25:59.030273  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:25:59.038870  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:25:59.230141  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:25:59.230228  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:25:59.239506  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:25:59.429823  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:25:59.429895  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:25:59.438940  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:25:59.630148  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:25:59.630223  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:25:59.639014  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:25:59.830279  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:25:59.830365  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:25:59.839400  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:26:00.029480  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:26:00.029578  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:26:00.039506  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:26:00.229819  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:26:00.229932  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:26:00.238666  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:26:00.429971  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:26:00.430041  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:26:00.439152  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:26:00.629391  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:26:00.629472  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:26:00.638771  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:26:00.830087  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:26:00.830179  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:26:00.839152  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:25:59.905306  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:01.905660  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:02.030088  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:04.030403  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:01.029653  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:26:01.029717  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:26:01.038688  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:26:01.038731  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:26:01.038777  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:26:01.047040  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
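
Each "Checking apiserver status" round above is the same probe: pgrep for a kube-apiserver process, retried until a deadline, after which the cluster is marked for reconfiguration. A minimal sketch of such a poll loop; the ~200ms cadence is read off the timestamps above and is an assumption, not a documented interval:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerPID polls `pgrep -xnf kube-apiserver.*minikube.*` until
// the process appears or the deadline passes, mirroring the retry loop above.
func waitForAPIServerPID(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("sudo", "pgrep", "-xnf",
			"kube-apiserver.*minikube.*").Output()
		if err == nil {
			return string(out), nil // pgrep exits 0 only when it matched a PID
		}
		time.Sleep(200 * time.Millisecond) // cadence inferred from the timestamps
	}
	return "", fmt.Errorf("timed out waiting for kube-apiserver process")
}

func main() {
	if pid, err := waitForAPIServerPID(3 * time.Second); err != nil {
		fmt.Println("needs reconfigure:", err)
	} else {
		fmt.Print("apiserver pid: ", pid)
	}
}
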
	I0412 20:26:01.047087  302775 kubeadm.go:576] needs reconfigure: apiserver error: timed out waiting for the condition
	I0412 20:26:01.047098  302775 kubeadm.go:1067] stopping kube-system containers ...
	I0412 20:26:01.047119  302775 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0412 20:26:01.047173  302775 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0412 20:26:01.074252  302775 cri.go:87] found id: "9833ae46466ccbb9055512e120cc659bee6ff8ac05bf843caeb9217333fd6b63"
	I0412 20:26:01.074279  302775 cri.go:87] found id: "e86db06fb9ce1685b312bc36622f28895b85dab6e39ee399901dce4efc6da848"
	I0412 20:26:01.074289  302775 cri.go:87] found id: "51def5f5fb57c8ab61a9c585b1fe038e725e93a3a81684c7e48cceffbcd0e646"
	I0412 20:26:01.074295  302775 cri.go:87] found id: "3c8657a1a5932876c532e5632e32b1b7bd034c015a4b5519a1ff53cf749d1ffd"
	I0412 20:26:01.074302  302775 cri.go:87] found id: "1032ec9dc604b2d805be253a0f7df89424fc5ef71ef86566ee57cd79cf66939c"
	I0412 20:26:01.074309  302775 cri.go:87] found id: "71af7fb31571e3cef12dcdba3ab49897e95bdbe6c1d9d6d5bbb1c36c97242cda"
	I0412 20:26:01.074316  302775 cri.go:87] found id: ""
	I0412 20:26:01.074322  302775 cri.go:232] Stopping containers: [9833ae46466ccbb9055512e120cc659bee6ff8ac05bf843caeb9217333fd6b63 e86db06fb9ce1685b312bc36622f28895b85dab6e39ee399901dce4efc6da848 51def5f5fb57c8ab61a9c585b1fe038e725e93a3a81684c7e48cceffbcd0e646 3c8657a1a5932876c532e5632e32b1b7bd034c015a4b5519a1ff53cf749d1ffd 1032ec9dc604b2d805be253a0f7df89424fc5ef71ef86566ee57cd79cf66939c 71af7fb31571e3cef12dcdba3ab49897e95bdbe6c1d9d6d5bbb1c36c97242cda]
	I0412 20:26:01.074376  302775 ssh_runner.go:195] Run: which crictl
	I0412 20:26:01.077493  302775 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop 9833ae46466ccbb9055512e120cc659bee6ff8ac05bf843caeb9217333fd6b63 e86db06fb9ce1685b312bc36622f28895b85dab6e39ee399901dce4efc6da848 51def5f5fb57c8ab61a9c585b1fe038e725e93a3a81684c7e48cceffbcd0e646 3c8657a1a5932876c532e5632e32b1b7bd034c015a4b5519a1ff53cf749d1ffd 1032ec9dc604b2d805be253a0f7df89424fc5ef71ef86566ee57cd79cf66939c 71af7fb31571e3cef12dcdba3ab49897e95bdbe6c1d9d6d5bbb1c36c97242cda
	I0412 20:26:01.103072  302775 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0412 20:26:01.114425  302775 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0412 20:26:01.122172  302775 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Apr 12 20:12 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Apr 12 20:12 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2127 Apr 12 20:13 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5592 Apr 12 20:12 /etc/kubernetes/scheduler.conf
	
	I0412 20:26:01.122241  302775 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0412 20:26:01.129554  302775 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0412 20:26:01.136877  302775 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0412 20:26:01.143698  302775 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:26:01.143755  302775 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0412 20:26:01.150238  302775 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0412 20:26:01.157232  302775 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:26:01.157288  302775 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
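
The grep/rm pairs above check each conf under /etc/kubernetes for the expected control-plane endpoint (https://control-plane.minikube.internal:8444) and delete any file that does not reference it, so the kubeconfig phase below can regenerate them. A short Go sketch of that decision, assuming local file access (pruneStaleKubeconfigs is a hypothetical helper):

package main

import (
	"fmt"
	"os"
	"strings"
)

// pruneStaleKubeconfigs removes any conf that does not mention the expected
// apiserver endpoint, the same call the grep/rm pairs above make.
func pruneStaleKubeconfigs(endpoint string, files []string) {
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil {
			continue // missing file: nothing to prune
		}
		if !strings.Contains(string(data), endpoint) {
			fmt.Printf("%q not found in %s - removing\n", endpoint, f)
			os.Remove(f)
		}
	}
}

func main() {
	pruneStaleKubeconfigs("https://control-plane.minikube.internal:8444",
		[]string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		})
}
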
	I0412 20:26:01.164343  302775 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0412 20:26:01.171782  302775 kubeadm.go:678] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0412 20:26:01.171805  302775 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0412 20:26:01.218060  302775 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0412 20:26:01.745379  302775 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0412 20:26:01.885213  302775 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0412 20:26:01.938174  302775 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
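
Rather than a full `kubeadm init`, the restart path replays individual init phases in order: certs, kubeconfig, kubelet-start, control-plane, etcd. A sketch of running the same sequence, assuming a root shell on the node with the pinned binaries PATH from the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, p := range phases {
		args := append([]string{"init", "phase"}, p...)
		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
		cmd := exec.Command("kubeadm", args...)
		// prefer the cluster-pinned binaries, as the log's env PATH prefix does
		cmd.Env = append(os.Environ(),
			"PATH=/var/lib/minikube/binaries/v1.23.5:"+os.Getenv("PATH"))
		if out, err := cmd.CombinedOutput(); err != nil {
			fmt.Printf("phase %v failed: %v\n%s", p, err, out)
			return
		}
	}
}
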
	I0412 20:26:02.011809  302775 api_server.go:51] waiting for apiserver process to appear ...
	I0412 20:26:02.011879  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:26:02.521271  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:26:03.021279  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:26:03.521794  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:26:04.021460  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:26:04.521473  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:26:05.021310  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:26:05.521258  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:26:04.405325  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:06.905312  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:06.529561  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:08.530280  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:06.022069  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:26:06.522094  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:26:07.022120  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:26:07.521096  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:26:08.021120  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:26:08.091617  302775 api_server.go:71] duration metric: took 6.079806462s to wait for apiserver process to appear ...
	I0412 20:26:08.091701  302775 api_server.go:87] waiting for apiserver healthz status ...
	I0412 20:26:08.091726  302775 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0412 20:26:08.092170  302775 api_server.go:256] stopped: https://192.168.49.2:8444/healthz: Get "https://192.168.49.2:8444/healthz": dial tcp 192.168.49.2:8444: connect: connection refused
	I0412 20:26:08.592673  302775 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0412 20:26:11.086493  302775 api_server.go:266] https://192.168.49.2:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0412 20:26:11.086525  302775 api_server.go:102] status: https://192.168.49.2:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0412 20:26:11.092362  302775 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0412 20:26:11.097010  302775 api_server.go:266] https://192.168.49.2:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0412 20:26:11.097085  302775 api_server.go:102] status: https://192.168.49.2:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0412 20:26:11.592382  302775 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0412 20:26:11.597320  302775 api_server.go:266] https://192.168.49.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0412 20:26:11.597353  302775 api_server.go:102] status: https://192.168.49.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0412 20:26:12.092945  302775 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0412 20:26:12.097452  302775 api_server.go:266] https://192.168.49.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0412 20:26:12.097482  302775 api_server.go:102] status: https://192.168.49.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0412 20:26:12.593112  302775 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0412 20:26:12.598178  302775 api_server.go:266] https://192.168.49.2:8444/healthz returned 200:
	ok
	I0412 20:26:12.604429  302775 api_server.go:140] control plane version: v1.23.5
	I0412 20:26:12.604455  302775 api_server.go:130] duration metric: took 4.512735667s to wait for apiserver health ...
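
The healthz wait above hits https://192.168.49.2:8444/healthz anonymously, so it first sees 403 (RBAC not yet bootstrapped), then 500 with per-hook status, then 200 "ok"; any non-200 response is retried. A sketch of such a poll, assuming an anonymous probe that skips TLS verification (the 500ms interval is an assumption):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthz polls the apiserver /healthz endpoint until it returns 200,
// tolerating the 403 and 500 responses seen above during startup.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// anonymous probe: no client cert, so skip verification
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz: %s\n", body)
				return nil
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("healthz never returned 200 within %s", timeout)
}

func main() {
	if err := waitHealthz("https://192.168.49.2:8444/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
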
	I0412 20:26:12.604466  302775 cni.go:93] Creating CNI manager for ""
	I0412 20:26:12.604475  302775 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0412 20:26:09.405613  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:11.905154  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:11.029929  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:13.030209  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:15.530013  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:12.607164  302775 out.go:176] * Configuring CNI (Container Networking Interface) ...
	I0412 20:26:12.607235  302775 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0412 20:26:12.610895  302775 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.5/kubectl ...
	I0412 20:26:12.610917  302775 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0412 20:26:12.624805  302775 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
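
CNI setup above stages the kindnet manifest at /var/tmp/minikube/cni.yaml (the "scp memory" step) and applies it with the cluster-pinned kubectl. A sketch of the same two steps, assuming local filesystem access and a local copy of the manifest (the cni.yaml source path is hypothetical):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// applyCNIManifest writes the manifest bytes to the staging path and applies
// them with the pinned kubectl, as the two log lines above do over SSH.
func applyCNIManifest(manifest []byte) error {
	const path = "/var/tmp/minikube/cni.yaml"
	if err := os.WriteFile(path, manifest, 0644); err != nil {
		return err
	}
	out, err := exec.Command("sudo",
		"/var/lib/minikube/binaries/v1.23.5/kubectl", "apply",
		"--kubeconfig=/var/lib/minikube/kubeconfig", "-f", path).CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl apply: %v\n%s", err, out)
	}
	return nil
}

func main() {
	manifest, err := os.ReadFile("cni.yaml") // hypothetical local copy of the manifest
	if err != nil {
		panic(err)
	}
	if err := applyCNIManifest(manifest); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
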
	I0412 20:26:13.514228  302775 system_pods.go:43] waiting for kube-system pods to appear ...
	I0412 20:26:13.521326  302775 system_pods.go:59] 9 kube-system pods found
	I0412 20:26:13.521387  302775 system_pods.go:61] "coredns-64897985d-c2gzm" [17d60869-0f98-4975-877a-d2ac69c4c6c2] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0412 20:26:13.521400  302775 system_pods.go:61] "etcd-default-k8s-different-port-20220412201228-42006" [90ac8791-2f40-445e-a751-748814d43a72] Running
	I0412 20:26:13.521415  302775 system_pods.go:61] "kindnet-852v4" [d4596d79-4aba-4c96-9fd5-c2c2b2010810] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0412 20:26:13.521437  302775 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20220412201228-42006" [a3eb3b43-f13c-4205-9caf-0b3914050d7c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0412 20:26:13.521450  302775 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20220412201228-42006" [fca7914c-0a48-40de-af60-44c695d023c7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0412 20:26:13.521456  302775 system_pods.go:61] "kube-proxy-nfsgp" [fb26fa90-e38d-4c50-bbdc-aa46859bef70] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0412 20:26:13.521466  302775 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20220412201228-42006" [9fbd69c6-cf7b-4801-b028-f7729f80bf64] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0412 20:26:13.521475  302775 system_pods.go:61] "metrics-server-b955d9d8-8z9c9" [e954cf67-0a7d-42ed-b754-921b79512531] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0412 20:26:13.521484  302775 system_pods.go:61] "storage-provisioner" [c1d494a3-740b-43f4-bd16-12e781074fdd] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0412 20:26:13.521493  302775 system_pods.go:74] duration metric: took 7.243145ms to wait for pod list to return data ...
	I0412 20:26:13.521504  302775 node_conditions.go:102] verifying NodePressure condition ...
	I0412 20:26:13.524664  302775 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0412 20:26:13.524723  302775 node_conditions.go:123] node cpu capacity is 8
	I0412 20:26:13.524744  302775 node_conditions.go:105] duration metric: took 3.23136ms to run NodePressure ...
	I0412 20:26:13.524771  302775 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0412 20:26:13.661578  302775 kubeadm.go:737] waiting for restarted kubelet to initialise ...
	I0412 20:26:13.665722  302775 kubeadm.go:752] kubelet initialised
	I0412 20:26:13.665746  302775 kubeadm.go:753] duration metric: took 4.136738ms waiting for restarted kubelet to initialise ...
	I0412 20:26:13.665755  302775 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0412 20:26:13.670837  302775 pod_ready.go:78] waiting up to 4m0s for pod "coredns-64897985d-c2gzm" in "kube-system" namespace to be "Ready" ...
	I0412 20:26:15.676828  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
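
The repeated pod_ready:102 lines above (and the interleaved node_ready:58 lines from the other two test processes) are the same pattern: poll the object's Ready condition until it flips to True or the wait times out; here coredns stays Pending because the node keeps its not-ready taint. A sketch of that check using client-go, assuming a kubeconfig path (hypothetical; this is not minikube's pod_ready.go):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady reports whether the pod has condition Ready=True.
func podIsReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(
			context.TODO(), "coredns-64897985d-c2gzm", metav1.GetOptions{})
		if err != nil {
			fmt.Println("get pod:", err)
		} else if podIsReady(pod) {
			fmt.Println("coredns is Ready")
			return
		} else {
			fmt.Println("coredns not Ready yet; phase:", pod.Status.Phase)
		}
		time.Sleep(2 * time.Second)
	}
}
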
	I0412 20:26:14.405001  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:16.405140  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:18.405282  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:18.029626  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:20.029796  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:18.177431  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:20.676699  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:20.904768  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:22.905306  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:22.530289  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:25.030441  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:22.676917  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:25.177312  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:25.405505  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:27.405547  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:27.529706  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:29.529954  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:27.677396  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:30.176836  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:29.904767  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:31.905389  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:32.029879  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:34.030539  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:32.177928  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:34.676583  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:34.405637  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:36.904807  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:36.030819  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:38.529411  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:40.529737  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:36.676861  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:38.676927  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:39.404491  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:41.404659  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:43.905243  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:43.029801  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:45.030177  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:41.177333  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:43.177431  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:45.177567  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:46.404939  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:48.405023  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:47.529990  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:50.029848  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:47.676992  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:50.177314  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:50.904925  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:52.905456  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:52.529958  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:54.530211  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:52.677354  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:55.177581  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:55.404968  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:57.904806  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:57.029172  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:59.029355  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:57.177797  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:59.676784  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:59.905303  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:27:02.404803  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:27:01.030119  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:27:03.529481  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:27:02.176739  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:04.677083  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:04.904522  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:27:06.905502  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:27:06.030007  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:27:08.529404  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:27:07.177282  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:09.677448  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:09.405228  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:27:11.905282  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:27:11.029791  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:27:13.030282  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:27:15.529429  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:27:12.176384  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:14.177069  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:14.404646  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:27:16.405558  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:27:18.905261  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:27:17.530006  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:27:20.030016  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:27:16.177280  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:18.677413  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:21.405385  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:27:22.907629  289404 node_ready.go:38] duration metric: took 4m0.012711851s waiting for node "old-k8s-version-20220412200421-42006" to be "Ready" ...
	I0412 20:27:22.910753  289404 out.go:176] 
	W0412 20:27:22.910934  289404 out.go:241] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0412 20:27:22.910950  289404 out.go:241] * 
	W0412 20:27:22.911829  289404 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0412 20:27:22.030056  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:27:24.529656  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:27:21.176971  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:23.676778  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:25.677210  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:27.029850  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:27:27.532457  293188 node_ready.go:38] duration metric: took 4m0.016261704s waiting for node "embed-certs-20220412200510-42006" to be "Ready" ...
	I0412 20:27:27.535074  293188 out.go:176] 
	W0412 20:27:27.535184  293188 out.go:241] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0412 20:27:27.535195  293188 out.go:241] * 
	W0412 20:27:27.535868  293188 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
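	The interleaved pod_ready.go lines around these failures all carry the same scheduling message: 0/1 nodes are available because the single node still has the node.kubernetes.io/not-ready taint. That taint is applied by the node lifecycle controller while the node's Ready condition is False (here, most plausibly because the CNI never came up), and the coredns pod does not tolerate it, so it stays Pending indefinitely. A quick way to confirm, assuming access to the profile's kubeconfig:
	
	    kubectl describe node default-k8s-different-port-20220412201228-42006 | grep -i taint
	    kubectl get node default-k8s-different-port-20220412201228-42006 -o jsonpath='{.spec.taints}'
	
	Deleting the taint by hand (kubectl taint nodes <node> node.kubernetes.io/not-ready:NoSchedule-) is not a fix: the controller re-adds it for as long as the node remains NotReady.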
	I0412 20:27:28.176545  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:30.177022  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:32.677020  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:35.177243  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:37.677194  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:40.176627  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:42.177209  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:44.677318  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:46.677818  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:49.176630  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:51.676722  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:54.176912  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:56.177137  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:58.677009  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:28:01.177266  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:28:03.676844  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:28:06.176674  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:28:08.177076  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:28:10.177207  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:28:12.676641  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:28:15.176557  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:28:17.677002  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:28:19.677697  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:28:22.176483  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:28:24.676630  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:28:26.677667  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:28:29.177357  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:28:31.677367  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:28:34.176852  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:28:36.177402  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:28:38.677164  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:28:41.177066  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:28:43.676983  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:28:46.177366  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:28:48.677127  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:28:50.677295  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:28:53.177230  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:28:55.677228  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:28:58.176672  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:29:00.176822  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:29:02.676739  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:29:04.677056  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:29:06.677123  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:29:09.176984  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:29:11.677277  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:29:14.176562  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:29:16.176807  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:29:18.677182  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:29:21.177384  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:29:23.677402  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:29:26.176749  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:29:28.176804  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:29:30.177721  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:29:32.676621  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:29:34.677246  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:29:36.677802  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:29:39.176692  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:29:41.676441  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:29:43.676503  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:29:45.677234  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:29:48.177008  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:29:50.677510  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:29:53.177088  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:29:55.677043  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:29:58.176812  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:30:00.177215  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:30:02.676366  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:30:04.676503  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:30:06.676719  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:30:08.677078  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:30:11.176385  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:30:13.176787  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:30:13.673973  302775 pod_ready.go:81] duration metric: took 4m0.003097375s waiting for pod "coredns-64897985d-c2gzm" in "kube-system" namespace to be "Ready" ...
	E0412 20:30:13.674004  302775 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "coredns-64897985d-c2gzm" in "kube-system" namespace to be "Ready" (will not retry!)
	I0412 20:30:13.674026  302775 pod_ready.go:38] duration metric: took 4m0.008261536s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0412 20:30:13.674088  302775 kubeadm.go:605] restartCluster took 4m15.671526358s
	W0412 20:30:13.674261  302775 out.go:241] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0412 20:30:13.674296  302775 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0412 20:30:15.434543  302775 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.760223538s)
	I0412 20:30:15.434648  302775 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0412 20:30:15.444487  302775 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0412 20:30:15.452033  302775 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0412 20:30:15.452119  302775 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0412 20:30:15.459066  302775 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0412 20:30:15.459111  302775 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
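	With /etc/kubernetes emptied by the reset, the stale-config check above fails as expected and minikube proceeds straight to a fresh kubeadm init. The long --ignore-preflight-errors list disables checks that cannot pass inside a docker-driver node (SystemVerification, Swap, Mem, the port and manifest-directory checks, and the bridge-nf-call-iptables file check; kubeadm.go:221 above notes the SystemVerification case explicitly). As a sketch, the preflight phase can be exercised in isolation inside the node with the same config file to see which checks would otherwise fire:
	
	    sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" \
	      kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml \
	      --ignore-preflight-errors=Swap,Mem,SystemVerification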
	I0412 20:30:28.943093  302775 out.go:203]   - Generating certificates and keys ...
	I0412 20:30:28.946723  302775 out.go:203]   - Booting up control plane ...
	I0412 20:30:28.949531  302775 out.go:203]   - Configuring RBAC rules ...
	I0412 20:30:28.951251  302775 cni.go:93] Creating CNI manager for ""
	I0412 20:30:28.951270  302775 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0412 20:30:28.954437  302775 out.go:176] * Configuring CNI (Container Networking Interface) ...
	I0412 20:30:28.954502  302775 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0412 20:30:28.958449  302775 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.5/kubectl ...
	I0412 20:30:28.958473  302775 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0412 20:30:28.972610  302775 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
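	For the docker driver with the containerd runtime, minikube selects kindnet as the CNI (cni.go:160 above), streams the embedded manifest to /var/tmp/minikube/cni.yaml over SSH (the "scp memory" line), and applies it with the bundled kubectl. Whether the CNI actually came up can be checked from the host; the pod label below is an assumption about the kindnet daemonset, so adjust it if the manifest differs:
	
	    docker exec default-k8s-different-port-20220412201228-42006 ls /etc/cni/net.d
	    kubectl -n kube-system get pods -l app=kindnet -o wide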
	I0412 20:30:29.581068  302775 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0412 20:30:29.581147  302775 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl label nodes minikube.k8s.io/version=v1.25.2 minikube.k8s.io/commit=dcd548d63d1c0dcbdc0ffc0bd37d4379117c142f minikube.k8s.io/name=default-k8s-different-port-20220412201228-42006 minikube.k8s.io/updated_at=2022_04_12T20_30_29_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:30:29.581148  302775 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:30:29.588127  302775 ops.go:34] apiserver oom_adj: -16
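	The oom_adj read above is a sanity check that the kubelet shielded the apiserver from the kernel OOM killer: on the legacy -17..15 oom_adj scale, -16 makes the process one of the last candidates to be killed. The same information is exposed through the current interface as well:
	
	    cat /proc/$(pgrep kube-apiserver)/oom_adj        # legacy scale, as read by minikube
	    cat /proc/$(pgrep kube-apiserver)/oom_score_adj  # current -1000..1000 scale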
	I0412 20:30:29.648666  302775 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:30:30.229416  302775 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:30:30.729281  302775 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:30:31.229706  302775 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:30:31.729052  302775 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:30:32.228891  302775 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:30:32.729287  302775 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:30:33.228878  302775 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:30:33.729605  302775 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:30:34.229274  302775 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:30:34.729516  302775 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:30:35.229278  302775 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:30:35.729029  302775 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:30:36.228984  302775 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:30:36.729282  302775 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:30:37.229296  302775 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:30:37.729119  302775 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:30:38.229274  302775 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:30:38.729302  302775 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:30:39.229163  302775 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:30:39.728992  302775 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:30:40.229522  302775 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:30:40.729277  302775 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:30:41.228750  302775 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:30:41.729285  302775 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:30:42.228910  302775 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:30:42.729297  302775 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:30:42.795666  302775 kubeadm.go:1020] duration metric: took 13.214575797s to wait for elevateKubeSystemPrivileges.
	I0412 20:30:42.795702  302775 kubeadm.go:393] StartCluster complete in 4m44.840593181s
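	The burst of "kubectl get sa default" calls above is elevateKubeSystemPrivileges polling for the default ServiceAccount, which the controller-manager creates asynchronously after init; once it exists, the minikube-rbac ClusterRoleBinding issued by the earlier "create clusterrolebinding" call can take effect. The loop is equivalent to something like the following sketch, run inside the node:
	
	    until sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default \
	        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	      sleep 0.5
	    done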
	I0412 20:30:42.795726  302775 settings.go:142] acquiring lock: {Name:mkaf0259d09993f7f0249c32b54fea561e21f88c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 20:30:42.795894  302775 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0412 20:30:42.797959  302775 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig: {Name:mk47182dadcc139652898f38b199a7292a3b4031 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 20:30:43.316096  302775 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20220412201228-42006" rescaled to 1
	I0412 20:30:43.316236  302775 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0412 20:30:43.316267  302775 addons.go:415] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I0412 20:30:43.316330  302775 addons.go:65] Setting storage-provisioner=true in profile "default-k8s-different-port-20220412201228-42006"
	I0412 20:30:43.316365  302775 addons.go:65] Setting default-storageclass=true in profile "default-k8s-different-port-20220412201228-42006"
	I0412 20:30:43.316387  302775 addons.go:65] Setting metrics-server=true in profile "default-k8s-different-port-20220412201228-42006"
	I0412 20:30:43.316392  302775 addons.go:153] Setting addon storage-provisioner=true in "default-k8s-different-port-20220412201228-42006"
	I0412 20:30:43.316399  302775 addons.go:153] Setting addon metrics-server=true in "default-k8s-different-port-20220412201228-42006"
	I0412 20:30:43.316231  302775 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.23.5 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0412 20:30:43.318925  302775 out.go:176] * Verifying Kubernetes components...
	I0412 20:30:43.316370  302775 addons.go:65] Setting dashboard=true in profile "default-k8s-different-port-20220412201228-42006"
	I0412 20:30:43.319000  302775 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0412 20:30:43.319019  302775 addons.go:153] Setting addon dashboard=true in "default-k8s-different-port-20220412201228-42006"
	I0412 20:30:43.316478  302775 config.go:178] Loaded profile config "default-k8s-different-port-20220412201228-42006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.5
	I0412 20:30:43.316392  302775 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20220412201228-42006"
	W0412 20:30:43.316403  302775 addons.go:165] addon storage-provisioner should already be in state true
	I0412 20:30:43.319204  302775 host.go:66] Checking if "default-k8s-different-port-20220412201228-42006" exists ...
	W0412 20:30:43.316409  302775 addons.go:165] addon metrics-server should already be in state true
	I0412 20:30:43.319309  302775 host.go:66] Checking if "default-k8s-different-port-20220412201228-42006" exists ...
	W0412 20:30:43.319076  302775 addons.go:165] addon dashboard should already be in state true
	I0412 20:30:43.319411  302775 host.go:66] Checking if "default-k8s-different-port-20220412201228-42006" exists ...
	I0412 20:30:43.319521  302775 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220412201228-42006 --format={{.State.Status}}
	I0412 20:30:43.319712  302775 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220412201228-42006 --format={{.State.Status}}
	I0412 20:30:43.319812  302775 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220412201228-42006 --format={{.State.Status}}
	I0412 20:30:43.319884  302775 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220412201228-42006 --format={{.State.Status}}
	I0412 20:30:43.368004  302775 out.go:176]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0412 20:30:43.369733  302775 out.go:176]   - Using image kubernetesui/dashboard:v2.5.1
	I0412 20:30:43.368143  302775 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0412 20:30:43.369830  302775 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0412 20:30:43.371713  302775 out.go:176]   - Using image k8s.gcr.io/echoserver:1.4
	I0412 20:30:43.369909  302775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220412201228-42006
	I0412 20:30:43.371811  302775 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0412 20:30:43.371829  302775 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0412 20:30:43.371894  302775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220412201228-42006
	I0412 20:30:43.373558  302775 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0412 20:30:43.373752  302775 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0412 20:30:43.373772  302775 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0412 20:30:43.373846  302775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220412201228-42006
	I0412 20:30:43.384370  302775 addons.go:153] Setting addon default-storageclass=true in "default-k8s-different-port-20220412201228-42006"
	W0412 20:30:43.384406  302775 addons.go:165] addon default-storageclass should already be in state true
	I0412 20:30:43.384440  302775 host.go:66] Checking if "default-k8s-different-port-20220412201228-42006" exists ...
	I0412 20:30:43.384946  302775 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220412201228-42006 --format={{.State.Status}}
	I0412 20:30:43.415524  302775 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20220412201228-42006" to be "Ready" ...
	I0412 20:30:43.415635  302775 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0412 20:30:43.419849  302775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220412201228-42006/id_rsa Username:docker}
	I0412 20:30:43.421835  302775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220412201228-42006/id_rsa Username:docker}
	I0412 20:30:43.422931  302775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220412201228-42006/id_rsa Username:docker}
	I0412 20:30:43.441543  302775 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0412 20:30:43.441567  302775 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0412 20:30:43.441611  302775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220412201228-42006
	I0412 20:30:43.477201  302775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220412201228-42006/id_rsa Username:docker}
	I0412 20:30:43.584023  302775 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0412 20:30:43.594296  302775 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0412 20:30:43.594323  302775 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0412 20:30:43.594540  302775 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0412 20:30:43.594567  302775 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0412 20:30:43.597433  302775 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0412 20:30:43.611081  302775 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0412 20:30:43.611109  302775 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0412 20:30:43.612709  302775 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0412 20:30:43.612735  302775 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0412 20:30:43.695590  302775 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0412 20:30:43.695620  302775 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0412 20:30:43.695871  302775 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0412 20:30:43.695896  302775 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0412 20:30:43.713161  302775 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0412 20:30:43.783491  302775 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0412 20:30:43.783522  302775 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0412 20:30:43.786723  302775 start.go:777] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
	I0412 20:30:43.804035  302775 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0412 20:30:43.804161  302775 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0412 20:30:43.880364  302775 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0412 20:30:43.880416  302775 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0412 20:30:43.898688  302775 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0412 20:30:43.898715  302775 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0412 20:30:43.979407  302775 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0412 20:30:43.979444  302775 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0412 20:30:44.000255  302775 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0412 20:30:44.000283  302775 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0412 20:30:44.102994  302775 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0412 20:30:44.494063  302775 addons.go:386] Verifying addon metrics-server=true in "default-k8s-different-port-20220412201228-42006"
	I0412 20:30:44.918251  302775 out.go:176] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0412 20:30:44.918280  302775 addons.go:417] enableAddons completed in 1.602020138s
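The addon phase above finishes in about 1.6s: each manifest is scp'd to /etc/kubernetes/addons on the node and applied with the bundled kubectl, and the CoreDNS ConfigMap is rewritten through the sed pipeline at 20:30:43 to inject a hosts block for host.minikube.internal. A minimal way to confirm the injected record, assuming kubectl is pointed at this cluster's kubeconfig:

    # print the live Corefile and show the injected hosts block
    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' | grep -A3 'hosts {'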
	I0412 20:30:45.423200  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	[... identical node_ready.go:58 polls, roughly every 2.5s, from 20:30:47 through 20:34:41 elided ...]
	I0412 20:34:43.422840  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:34:43.425257  302775 node_ready.go:38] duration metric: took 4m0.009696502s waiting for node "default-k8s-different-port-20220412201228-42006" to be "Ready" ...
	I0412 20:34:43.428510  302775 out.go:176] 
	W0412 20:34:43.428724  302775 out.go:241] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0412 20:34:43.428749  302775 out.go:241] * 
	W0412 20:34:43.429581  302775 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	
	* 
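The GUEST_START exit above is the readiness poll giving up: node_ready.go polled the node's Ready condition every ~2.5s and it never left "False". A sketch of the same check by hand, assuming kubectl is pointed at this cluster:

    # reproduce the gate minikube was polling
    kubectl wait node/default-k8s-different-port-20220412201228-42006 --for=condition=Ready --timeout=60s
    # and read the failing condition's message directly
    kubectl get node default-k8s-different-port-20220412201228-42006 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].message}'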
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	cb781bd82f1bd       6de166512aa22       18 seconds ago      Exited              kindnet-cni               5                   e7f85670aab62
	e482baaa02b92       3c53fa8541f95       4 minutes ago       Running             kube-proxy                0                   7a88fea74a24c
	270d41bcba3e1       3fc1d62d65872       4 minutes ago       Running             kube-apiserver            2                   135f4c9f6133c
	93c8ad43087d3       b0c9e5e4dbb14       4 minutes ago       Running             kube-controller-manager   2                   18279564d681b
	34e686863f9b5       884d49d6d8c9f       4 minutes ago       Running             kube-scheduler            2                   159717a64a264
	c4cb54a089e01       25f8c7f3da61c       4 minutes ago       Running             etcd                      2                   5a70dacd4ef7d
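The table is telling: every control-plane container is Running, but kindnet-cni is Exited on attempt 5, so the CNI daemon is crash-looping while everything else waits on it. A sketch for pulling its logs from inside the node container, assuming crictl is present in the kicbase image (the truncated container ID comes from the row above):

    # list kindnet attempts and dump the last exited one's output
    docker exec default-k8s-different-port-20220412201228-42006 crictl ps -a --name kindnet-cni
    docker exec default-k8s-different-port-20220412201228-42006 crictl logs cb781bd82f1bd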
	
	* 
	* ==> containerd <==
	* -- Logs begin at Tue 2022-04-12 20:25:42 UTC, end at Tue 2022-04-12 20:34:44 UTC. --
	Apr 12 20:32:02 default-k8s-different-port-20220412201228-42006 containerd[345]: time="2022-04-12T20:32:02.424581069Z" level=warning msg="cleaning up after shim disconnected" id=ff8f7719bef9dc458e22c7f756f7998bd1a3cea1b1842683332ab37618f85f73 namespace=k8s.io
	Apr 12 20:32:02 default-k8s-different-port-20220412201228-42006 containerd[345]: time="2022-04-12T20:32:02.424594769Z" level=info msg="cleaning up dead shim"
	Apr 12 20:32:02 default-k8s-different-port-20220412201228-42006 containerd[345]: time="2022-04-12T20:32:02.435663899Z" level=warning msg="cleanup warnings time=\"2022-04-12T20:32:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4246\n"
	Apr 12 20:32:03 default-k8s-different-port-20220412201228-42006 containerd[345]: time="2022-04-12T20:32:03.232131225Z" level=info msg="RemoveContainer for \"5ddf6bac340eacafc25283b47b56176da6f3768012e8265bce5fb9efd9ee520d\""
	Apr 12 20:32:03 default-k8s-different-port-20220412201228-42006 containerd[345]: time="2022-04-12T20:32:03.236878223Z" level=info msg="RemoveContainer for \"5ddf6bac340eacafc25283b47b56176da6f3768012e8265bce5fb9efd9ee520d\" returns successfully"
	Apr 12 20:32:44 default-k8s-different-port-20220412201228-42006 containerd[345]: time="2022-04-12T20:32:44.918396262Z" level=info msg="CreateContainer within sandbox \"e7f85670aab62d31b92969730ab69e718b4e4e593fb5dbb7fd69a13e8b1e1b80\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:4,}"
	Apr 12 20:32:44 default-k8s-different-port-20220412201228-42006 containerd[345]: time="2022-04-12T20:32:44.934616600Z" level=info msg="CreateContainer within sandbox \"e7f85670aab62d31b92969730ab69e718b4e4e593fb5dbb7fd69a13e8b1e1b80\" for &ContainerMetadata{Name:kindnet-cni,Attempt:4,} returns container id \"3428c7637ac2b397c4c900b07892e76da5d2b2c188019b6951de3538d7755ba1\""
	Apr 12 20:32:44 default-k8s-different-port-20220412201228-42006 containerd[345]: time="2022-04-12T20:32:44.935307085Z" level=info msg="StartContainer for \"3428c7637ac2b397c4c900b07892e76da5d2b2c188019b6951de3538d7755ba1\""
	Apr 12 20:32:45 default-k8s-different-port-20220412201228-42006 containerd[345]: time="2022-04-12T20:32:45.084064267Z" level=info msg="StartContainer for \"3428c7637ac2b397c4c900b07892e76da5d2b2c188019b6951de3538d7755ba1\" returns successfully"
	Apr 12 20:32:55 default-k8s-different-port-20220412201228-42006 containerd[345]: time="2022-04-12T20:32:55.324641347Z" level=info msg="shim disconnected" id=3428c7637ac2b397c4c900b07892e76da5d2b2c188019b6951de3538d7755ba1
	Apr 12 20:32:55 default-k8s-different-port-20220412201228-42006 containerd[345]: time="2022-04-12T20:32:55.324696959Z" level=warning msg="cleaning up after shim disconnected" id=3428c7637ac2b397c4c900b07892e76da5d2b2c188019b6951de3538d7755ba1 namespace=k8s.io
	Apr 12 20:32:55 default-k8s-different-port-20220412201228-42006 containerd[345]: time="2022-04-12T20:32:55.324707398Z" level=info msg="cleaning up dead shim"
	Apr 12 20:32:55 default-k8s-different-port-20220412201228-42006 containerd[345]: time="2022-04-12T20:32:55.336266086Z" level=warning msg="cleanup warnings time=\"2022-04-12T20:32:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4332\n"
	Apr 12 20:32:56 default-k8s-different-port-20220412201228-42006 containerd[345]: time="2022-04-12T20:32:56.325676246Z" level=info msg="RemoveContainer for \"ff8f7719bef9dc458e22c7f756f7998bd1a3cea1b1842683332ab37618f85f73\""
	Apr 12 20:32:56 default-k8s-different-port-20220412201228-42006 containerd[345]: time="2022-04-12T20:32:56.333107440Z" level=info msg="RemoveContainer for \"ff8f7719bef9dc458e22c7f756f7998bd1a3cea1b1842683332ab37618f85f73\" returns successfully"
	Apr 12 20:34:25 default-k8s-different-port-20220412201228-42006 containerd[345]: time="2022-04-12T20:34:25.919414870Z" level=info msg="CreateContainer within sandbox \"e7f85670aab62d31b92969730ab69e718b4e4e593fb5dbb7fd69a13e8b1e1b80\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:5,}"
	Apr 12 20:34:25 default-k8s-different-port-20220412201228-42006 containerd[345]: time="2022-04-12T20:34:25.936757137Z" level=info msg="CreateContainer within sandbox \"e7f85670aab62d31b92969730ab69e718b4e4e593fb5dbb7fd69a13e8b1e1b80\" for &ContainerMetadata{Name:kindnet-cni,Attempt:5,} returns container id \"cb781bd82f1bd82d9f6bdd2f4b6145a1671fc68f827524d1a49f6cd422e44fda\""
	Apr 12 20:34:25 default-k8s-different-port-20220412201228-42006 containerd[345]: time="2022-04-12T20:34:25.937418866Z" level=info msg="StartContainer for \"cb781bd82f1bd82d9f6bdd2f4b6145a1671fc68f827524d1a49f6cd422e44fda\""
	Apr 12 20:34:26 default-k8s-different-port-20220412201228-42006 containerd[345]: time="2022-04-12T20:34:26.083962966Z" level=info msg="StartContainer for \"cb781bd82f1bd82d9f6bdd2f4b6145a1671fc68f827524d1a49f6cd422e44fda\" returns successfully"
	Apr 12 20:34:36 default-k8s-different-port-20220412201228-42006 containerd[345]: time="2022-04-12T20:34:36.321762692Z" level=info msg="shim disconnected" id=cb781bd82f1bd82d9f6bdd2f4b6145a1671fc68f827524d1a49f6cd422e44fda
	Apr 12 20:34:36 default-k8s-different-port-20220412201228-42006 containerd[345]: time="2022-04-12T20:34:36.321844074Z" level=warning msg="cleaning up after shim disconnected" id=cb781bd82f1bd82d9f6bdd2f4b6145a1671fc68f827524d1a49f6cd422e44fda namespace=k8s.io
	Apr 12 20:34:36 default-k8s-different-port-20220412201228-42006 containerd[345]: time="2022-04-12T20:34:36.321861395Z" level=info msg="cleaning up dead shim"
	Apr 12 20:34:36 default-k8s-different-port-20220412201228-42006 containerd[345]: time="2022-04-12T20:34:36.332784784Z" level=warning msg="cleanup warnings time=\"2022-04-12T20:34:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4419\n"
	Apr 12 20:34:36 default-k8s-different-port-20220412201228-42006 containerd[345]: time="2022-04-12T20:34:36.498731718Z" level=info msg="RemoveContainer for \"3428c7637ac2b397c4c900b07892e76da5d2b2c188019b6951de3538d7755ba1\""
	Apr 12 20:34:36 default-k8s-different-port-20220412201228-42006 containerd[345]: time="2022-04-12T20:34:36.503600658Z" level=info msg="RemoveContainer for \"3428c7637ac2b397c4c900b07892e76da5d2b2c188019b6951de3538d7755ba1\" returns successfully"
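The containerd excerpt shows the loop's cadence: each kindnet-cni attempt starts, runs roughly ten seconds, the shim disconnects, and the previous attempt is removed, repeating at 20:32:44 and again at 20:34:25 with back-off in between. To watch it live (containerd runs under systemd in the node, per the journal header above):

    # follow containerd's journal inside the node container
    docker exec default-k8s-different-port-20220412201228-42006 journalctl -u containerd -f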
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-different-port-20220412201228-42006
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-different-port-20220412201228-42006
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=dcd548d63d1c0dcbdc0ffc0bd37d4379117c142f
	                    minikube.k8s.io/name=default-k8s-different-port-20220412201228-42006
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_04_12T20_30_29_0700
	                    minikube.k8s.io/version=v1.25.2
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 12 Apr 2022 20:30:25 +0000
	Taints:             node.kubernetes.io/not-ready:NoExecute
	                    node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-different-port-20220412201228-42006
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 12 Apr 2022 20:34:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 12 Apr 2022 20:30:41 +0000   Tue, 12 Apr 2022 20:30:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 12 Apr 2022 20:30:41 +0000   Tue, 12 Apr 2022 20:30:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 12 Apr 2022 20:30:41 +0000   Tue, 12 Apr 2022 20:30:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Tue, 12 Apr 2022 20:30:41 +0000   Tue, 12 Apr 2022 20:30:23 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    default-k8s-different-port-20220412201228-42006
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873828Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873828Ki
	  pods:               110
	System Info:
	  Machine ID:                 140a143b31184b58be947b52a01fff83
	  System UUID:                ef825856-4086-4c06-9629-95bede787d92
	  Boot ID:                    16b2caa1-c1b9-4ccc-85b8-d4dc3f51a5e1
	  Kernel Version:             5.13.0-1023-gcp
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.5.10
	  Kubelet Version:            v1.23.5
	  Kube-Proxy Version:         v1.23.5
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                                       ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-default-k8s-different-port-20220412201228-42006                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m10s
	  kube-system                 kindnet-hj8ss                                                              100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m2s
	  kube-system                 kube-apiserver-default-k8s-different-port-20220412201228-42006             250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m16s
	  kube-system                 kube-controller-manager-default-k8s-different-port-20220412201228-42006    200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m10s
	  kube-system                 kube-proxy-6qsrn                                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 kube-scheduler-default-k8s-different-port-20220412201228-42006             100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 4m1s                   kube-proxy  
	  Normal  NodeHasSufficientMemory  4m22s (x5 over 4m23s)  kubelet     Node default-k8s-different-port-20220412201228-42006 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m22s (x4 over 4m23s)  kubelet     Node default-k8s-different-port-20220412201228-42006 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m22s (x3 over 4m23s)  kubelet     Node default-k8s-different-port-20220412201228-42006 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m11s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m11s                  kubelet     Node default-k8s-different-port-20220412201228-42006 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m11s                  kubelet     Node default-k8s-different-port-20220412201228-42006 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m11s                  kubelet     Node default-k8s-different-port-20220412201228-42006 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m10s                  kubelet     Updated Node Allocatable limit across pods
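The Ready condition above pins the root cause: KubeletNotReady with "cni plugin not initialized", which also explains the lingering not-ready taints. With kindnet crash-looping, nothing ever writes a CNI config, so the kubelet never reports NetworkReady. A quick check on the node (a sketch; the standard CNI config directory is assumed here):

    # see whether any CNI conflist was ever written
    docker exec default-k8s-different-port-20220412201228-42006 ls -l /etc/cni/net.d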
	
	* 
	* ==> dmesg <==
	* [  +0.000009] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +0.125166] IPv4: martian source 10.244.0.7 from 10.244.0.7, on dev vethe3e22a2f
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff d2 83 e6 b4 2e c9 08 06
	[  +0.519855] IPv4: martian source 10.244.0.8 from 10.244.0.8, on dev vethde433a44
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fe f7 53 8a eb 26 08 06
	[  +0.208112] IPv4: martian source 10.244.0.9 from 10.244.0.9, on dev veth05fda112
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 06 c9 f0 64 c1 d9 08 06
	[Apr12 20:12] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +1.026706] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +1.023926] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +2.947865] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +1.023840] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +1.019933] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +2.959880] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +1.007861] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +1.023916] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
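The martian-source entries are packets with pod-CIDR sources (10.244.0.x) arriving on an interface with no matching reverse route, which is what pod traffic looks like while CNI routing is absent; note the [Apr12 20:12] timestamps, so some of these predate this profile and may be residue from earlier tests on the shared host. The kernel only logs these when the relevant sysctls are enabled (sketch):

    # confirm why the kernel is logging martians
    sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.all.log_martians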
	
	* 
	* ==> etcd [c4cb54a089e016fb617de68b938a6dc5f4fb174e64fbcd0bd528a56465898a39] <==
	* {"level":"info","ts":"2022-04-12T20:30:22.904Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2022-04-12T20:30:22.905Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2022-04-12T20:30:22.907Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-04-12T20:30:22.907Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-04-12T20:30:22.907Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-04-12T20:30:22.907Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-04-12T20:30:22.907Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-04-12T20:30:23.095Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2022-04-12T20:30:23.095Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2022-04-12T20:30:23.095Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2022-04-12T20:30:23.095Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2022-04-12T20:30:23.095Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-04-12T20:30:23.095Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2022-04-12T20:30:23.095Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-04-12T20:30:23.096Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-04-12T20:30:23.096Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:default-k8s-different-port-20220412201228-42006 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2022-04-12T20:30:23.096Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-04-12T20:30:23.096Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-04-12T20:30:23.096Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-04-12T20:30:23.096Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-04-12T20:30:23.096Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2022-04-12T20:30:23.096Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-04-12T20:30:23.096Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-04-12T20:30:23.097Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-04-12T20:30:23.097Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	
	* 
	* ==> kernel <==
	*  20:34:44 up  3:17,  0 users,  load average: 0.33, 0.56, 0.90
	Linux default-k8s-different-port-20220412201228-42006 5.13.0-1023-gcp #28~20.04.1-Ubuntu SMP Wed Mar 30 03:51:07 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [270d41bcba3e1865495674af56cd4330a1e32e7c91d1b01dfd4ff7473395e341] <==
	* I0412 20:30:27.326139       1 controller.go:611] quota admission added evaluator for: endpoints
	I0412 20:30:27.330248       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0412 20:30:27.849720       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0412 20:30:28.743272       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0412 20:30:28.754085       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0412 20:30:28.764848       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0412 20:30:33.886760       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0412 20:30:41.777087       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0412 20:30:42.375168       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0412 20:30:43.019422       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0412 20:30:44.486655       1 alloc.go:329] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs=map[IPv4:10.104.87.45]
	I0412 20:30:44.897894       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.103.52.233]
	I0412 20:30:44.909516       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.108.219.99]
	W0412 20:30:45.386225       1 handler_proxy.go:104] no RequestInfo found in the context
	E0412 20:30:45.386312       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0412 20:30:45.386325       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0412 20:31:45.386833       1 handler_proxy.go:104] no RequestInfo found in the context
	E0412 20:31:45.386899       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0412 20:31:45.386907       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0412 20:33:45.387202       1 handler_proxy.go:104] no RequestInfo found in the context
	E0412 20:33:45.387291       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0412 20:33:45.387300       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
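The recurring 503 for v1beta1.metrics.k8s.io is a downstream symptom: the aggregated APIService fronts the metrics-server pod, which cannot get a pod IP until CNI works, so the apiserver's OpenAPI controller re-queues indefinitely. A sketch to confirm, assuming the addon's usual k8s-app=metrics-server label:

    kubectl get apiservice v1beta1.metrics.k8s.io
    kubectl -n kube-system get pods -l k8s-app=metrics-server -o wide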
	
	* 
	* ==> kube-controller-manager [93c8ad43087d3210b37b054a5ce8ed0bb95d75d9a5620ef164f8434c299fc123] <==
	* E0412 20:30:44.779480       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-8469778f77" failed with pods "kubernetes-dashboard-8469778f77-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0412 20:30:44.782084       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-56974995fc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0412 20:30:44.782105       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" failed with pods "dashboard-metrics-scraper-56974995fc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0412 20:30:44.783325       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-8469778f77" failed with pods "kubernetes-dashboard-8469778f77-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0412 20:30:44.783327       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8469778f77-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0412 20:30:44.789875       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" failed with pods "dashboard-metrics-scraper-56974995fc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0412 20:30:44.789955       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-56974995fc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0412 20:30:44.791233       1 event.go:294] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8469778f77" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8469778f77-xs557"
	I0412 20:30:44.807361       1 event.go:294] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-56974995fc" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-56974995fc-wwmdw"
	E0412 20:31:11.845408       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0412 20:31:12.261477       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0412 20:31:41.862318       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0412 20:31:42.275860       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0412 20:32:11.880095       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0412 20:32:12.291783       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0412 20:32:41.898306       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0412 20:32:42.308745       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0412 20:33:11.915378       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0412 20:33:12.326722       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0412 20:33:41.932844       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0412 20:33:42.343038       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0412 20:34:11.949070       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0412 20:34:12.358600       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0412 20:34:41.963920       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0412 20:34:42.373396       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
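The controller-manager noise is the same symptom surfacing through API discovery: every ~30s the resource-quota controller and garbage collector run aggregated group discovery, hit the unavailable metrics.k8s.io group, and log these errors; they are secondary, not a second failure. A direct probe (sketch):

    # this is the discovery call that keeps failing
    kubectl get --raw /apis/metrics.k8s.io/v1beta1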
	
	* 
	* ==> kube-proxy [e482baaa02b921af7a2d84713ae74d5e73f0045c7b5566cd1ca264037643afe1] <==
	* I0412 20:30:42.989823       1 node.go:163] Successfully retrieved node IP: 192.168.49.2
	I0412 20:30:42.989877       1 server_others.go:138] "Detected node IP" address="192.168.49.2"
	I0412 20:30:42.989910       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0412 20:30:43.015484       1 server_others.go:206] "Using iptables Proxier"
	I0412 20:30:43.015529       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0412 20:30:43.015541       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0412 20:30:43.015557       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0412 20:30:43.016055       1 server.go:656] "Version info" version="v1.23.5"
	I0412 20:30:43.016766       1 config.go:226] "Starting endpoint slice config controller"
	I0412 20:30:43.016778       1 config.go:317] "Starting service config controller"
	I0412 20:30:43.016800       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0412 20:30:43.016801       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0412 20:30:43.116983       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0412 20:30:43.117015       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [34e686863f9b57d62f2cdd74d8adf7722e557fbf0077f3795f13ef4ae0783c90] <==
	* W0412 20:30:25.801595       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0412 20:30:25.801613       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0412 20:30:25.800817       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0412 20:30:25.802008       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0412 20:30:25.803969       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0412 20:30:25.803999       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0412 20:30:26.623962       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0412 20:30:26.623997       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0412 20:30:26.672428       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0412 20:30:26.672463       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0412 20:30:26.831990       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0412 20:30:26.832034       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0412 20:30:26.832832       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0412 20:30:26.832862       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0412 20:30:26.858524       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0412 20:30:26.858562       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0412 20:30:26.880852       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0412 20:30:26.880893       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0412 20:30:26.921532       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0412 20:30:26.921580       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0412 20:30:26.948831       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0412 20:30:26.948873       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0412 20:30:27.080498       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0412 20:30:27.080530       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0412 20:30:29.997243       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2022-04-12 20:25:42 UTC, end at Tue 2022-04-12 20:34:44 UTC. --
	Apr 12 20:33:34 default-k8s-different-port-20220412201228-42006 kubelet[3117]: E0412 20:33:34.122831    3117 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:33:39 default-k8s-different-port-20220412201228-42006 kubelet[3117]: E0412 20:33:39.123678    3117 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:33:44 default-k8s-different-port-20220412201228-42006 kubelet[3117]: E0412 20:33:44.125242    3117 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:33:47 default-k8s-different-port-20220412201228-42006 kubelet[3117]: I0412 20:33:47.916328    3117 scope.go:110] "RemoveContainer" containerID="3428c7637ac2b397c4c900b07892e76da5d2b2c188019b6951de3538d7755ba1"
	Apr 12 20:33:47 default-k8s-different-port-20220412201228-42006 kubelet[3117]: E0412 20:33:47.916693    3117 pod_workers.go:949] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kindnet-cni pod=kindnet-hj8ss_kube-system(fca962d5-1da5-4dc1-8931-01bf2864674f)\"" pod="kube-system/kindnet-hj8ss" podUID=fca962d5-1da5-4dc1-8931-01bf2864674f
	Apr 12 20:33:49 default-k8s-different-port-20220412201228-42006 kubelet[3117]: E0412 20:33:49.126137    3117 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:33:54 default-k8s-different-port-20220412201228-42006 kubelet[3117]: E0412 20:33:54.126980    3117 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:33:59 default-k8s-different-port-20220412201228-42006 kubelet[3117]: E0412 20:33:59.128569    3117 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:34:01 default-k8s-different-port-20220412201228-42006 kubelet[3117]: I0412 20:34:01.916310    3117 scope.go:110] "RemoveContainer" containerID="3428c7637ac2b397c4c900b07892e76da5d2b2c188019b6951de3538d7755ba1"
	Apr 12 20:34:01 default-k8s-different-port-20220412201228-42006 kubelet[3117]: E0412 20:34:01.916734    3117 pod_workers.go:949] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kindnet-cni pod=kindnet-hj8ss_kube-system(fca962d5-1da5-4dc1-8931-01bf2864674f)\"" pod="kube-system/kindnet-hj8ss" podUID=fca962d5-1da5-4dc1-8931-01bf2864674f
	Apr 12 20:34:04 default-k8s-different-port-20220412201228-42006 kubelet[3117]: E0412 20:34:04.129426    3117 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:34:09 default-k8s-different-port-20220412201228-42006 kubelet[3117]: E0412 20:34:09.130414    3117 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:34:13 default-k8s-different-port-20220412201228-42006 kubelet[3117]: I0412 20:34:13.916949    3117 scope.go:110] "RemoveContainer" containerID="3428c7637ac2b397c4c900b07892e76da5d2b2c188019b6951de3538d7755ba1"
	Apr 12 20:34:13 default-k8s-different-port-20220412201228-42006 kubelet[3117]: E0412 20:34:13.917238    3117 pod_workers.go:949] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kindnet-cni pod=kindnet-hj8ss_kube-system(fca962d5-1da5-4dc1-8931-01bf2864674f)\"" pod="kube-system/kindnet-hj8ss" podUID=fca962d5-1da5-4dc1-8931-01bf2864674f
	Apr 12 20:34:14 default-k8s-different-port-20220412201228-42006 kubelet[3117]: E0412 20:34:14.131755    3117 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:34:19 default-k8s-different-port-20220412201228-42006 kubelet[3117]: E0412 20:34:19.133491    3117 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:34:24 default-k8s-different-port-20220412201228-42006 kubelet[3117]: E0412 20:34:24.135041    3117 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:34:25 default-k8s-different-port-20220412201228-42006 kubelet[3117]: I0412 20:34:25.917028    3117 scope.go:110] "RemoveContainer" containerID="3428c7637ac2b397c4c900b07892e76da5d2b2c188019b6951de3538d7755ba1"
	Apr 12 20:34:29 default-k8s-different-port-20220412201228-42006 kubelet[3117]: E0412 20:34:29.136022    3117 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:34:34 default-k8s-different-port-20220412201228-42006 kubelet[3117]: E0412 20:34:34.136731    3117 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:34:36 default-k8s-different-port-20220412201228-42006 kubelet[3117]: I0412 20:34:36.497062    3117 scope.go:110] "RemoveContainer" containerID="3428c7637ac2b397c4c900b07892e76da5d2b2c188019b6951de3538d7755ba1"
	Apr 12 20:34:36 default-k8s-different-port-20220412201228-42006 kubelet[3117]: I0412 20:34:36.497524    3117 scope.go:110] "RemoveContainer" containerID="cb781bd82f1bd82d9f6bdd2f4b6145a1671fc68f827524d1a49f6cd422e44fda"
	Apr 12 20:34:36 default-k8s-different-port-20220412201228-42006 kubelet[3117]: E0412 20:34:36.497878    3117 pod_workers.go:949] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kindnet-cni pod=kindnet-hj8ss_kube-system(fca962d5-1da5-4dc1-8931-01bf2864674f)\"" pod="kube-system/kindnet-hj8ss" podUID=fca962d5-1da5-4dc1-8931-01bf2864674f
	Apr 12 20:34:39 default-k8s-different-port-20220412201228-42006 kubelet[3117]: E0412 20:34:39.138162    3117 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:34:44 default-k8s-different-port-20220412201228-42006 kubelet[3117]: E0412 20:34:44.139431    3117 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	

                                                
                                                
-- /stdout --
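Three failure signatures recur in the logs above. The controller-manager's `metrics.k8s.io/v1beta1: the server is currently unable to handle the request` errors usually mean the metrics-server APIService has no healthy backend; the kube-scheduler's `forbidden` warnings are ordinary startup noise that clears once its RBAC bindings are reconciled (the final `Caches are synced` line confirms this); and the kubelet's `cni plugin not initialized` errors track the crash-looping kindnet-cni container, which is the likeliest root cause here. A minimal triage sketch against the affected profile (the label selectors `k8s-app=metrics-server` and `app=kindnet` are assumptions based on the usual minikube manifests, not confirmed by this log):

	# Is the metrics APIService healthy? Group/version taken from the errors above.
	kubectl --context default-k8s-different-port-20220412201228-42006 get apiservice v1beta1.metrics.k8s.io
	# Assumed label selectors; adjust if the manifests differ.
	kubectl --context default-k8s-different-port-20220412201228-42006 -n kube-system get pods -l k8s-app=metrics-server
	kubectl --context default-k8s-different-port-20220412201228-42006 -n kube-system get pods -l app=kindnet
	# Logs of the previous (crashed) kindnet-cni container named in the kubelet errors.
	kubectl --context default-k8s-different-port-20220412201228-42006 -n kube-system logs kindnet-hj8ss -c kindnet-cni --previous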
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220412201228-42006 -n default-k8s-different-port-20220412201228-42006

                                                
                                                
=== CONT  TestStartStop/group/default-k8s-different-port/serial/SecondStart
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-different-port-20220412201228-42006 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: coredns-64897985d-979gq metrics-server-b955d9d8-splbx storage-provisioner dashboard-metrics-scraper-56974995fc-wwmdw kubernetes-dashboard-8469778f77-xs557
helpers_test.go:272: ======> post-mortem[TestStartStop/group/default-k8s-different-port/serial/SecondStart]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context default-k8s-different-port-20220412201228-42006 describe pod coredns-64897985d-979gq metrics-server-b955d9d8-splbx storage-provisioner dashboard-metrics-scraper-56974995fc-wwmdw kubernetes-dashboard-8469778f77-xs557
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20220412201228-42006 describe pod coredns-64897985d-979gq metrics-server-b955d9d8-splbx storage-provisioner dashboard-metrics-scraper-56974995fc-wwmdw kubernetes-dashboard-8469778f77-xs557: exit status 1 (71.014281ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-64897985d-979gq" not found
	Error from server (NotFound): pods "metrics-server-b955d9d8-splbx" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-56974995fc-wwmdw" not found
	Error from server (NotFound): pods "kubernetes-dashboard-8469778f77-xs557" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context default-k8s-different-port-20220412201228-42006 describe pod coredns-64897985d-979gq metrics-server-b955d9d8-splbx storage-provisioner dashboard-metrics-scraper-56974995fc-wwmdw kubernetes-dashboard-8469778f77-xs557: exit status 1
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/SecondStart (544.74s)
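The `NotFound` errors in the post-mortem above are an artifact of the post-mortem itself rather than part of the failure: the pod names were captured by the earlier field-selector query, and by the time `describe` ran those pods had been replaced by their controllers or garbage-collected. Re-running the listing at describe time avoids the stale names, e.g.:

	# Re-list non-running pods at describe time instead of reusing stale names.
	kubectl --context default-k8s-different-port-20220412201228-42006 get pods -A --field-selector=status.phase!=Running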

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (542.46s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:258: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-6fb5469cf5-k6tnl" [3c7be9cb-7736-41c0-9d34-16edf7674193] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.)

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
E0412 20:34:25.721061   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/addons-20220412192056-42006/client.crt: no such file or directory
E0412 20:35:13.412687   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/cilium-20220412195203-42006/client.crt: no such file or directory

                                                
                                                
E0412 20:35:31.558262   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/auto-20220412195201-42006/client.crt: no such file or directory

                                                
                                                
E0412 20:35:58.259921   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/custom-weave-20220412195203-42006/client.crt: no such file or directory
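The interleaved `cert_rotation.go:168` errors come from the test binary's client-go certificate-rotation watcher, which still references client certificates of profiles (`addons-…`, `cilium-…`, `auto-…`, `custom-weave-…`) that earlier tests have already deleted; they are noise relative to this test. One way to confirm those profile directories are gone (path taken verbatim from the errors above):

	ls /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/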

                                                
                                                
start_stop_delete_test.go:258: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: timed out waiting for the condition ****
start_stop_delete_test.go:258: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20220412200421-42006 -n old-k8s-version-20220412200421-42006
start_stop_delete_test.go:258: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2022-04-12 20:36:25.624210838 +0000 UTC m=+4564.600509241
start_stop_delete_test.go:258: (dbg) Run:  kubectl --context old-k8s-version-20220412200421-42006 describe po kubernetes-dashboard-6fb5469cf5-k6tnl -n kubernetes-dashboard
start_stop_delete_test.go:258: (dbg) Non-zero exit: kubectl --context old-k8s-version-20220412200421-42006 describe po kubernetes-dashboard-6fb5469cf5-k6tnl -n kubernetes-dashboard: context deadline exceeded (2.044µs)
start_stop_delete_test.go:258: kubectl --context old-k8s-version-20220412200421-42006 describe po kubernetes-dashboard-6fb5469cf5-k6tnl -n kubernetes-dashboard: context deadline exceeded
start_stop_delete_test.go:258: (dbg) Run:  kubectl --context old-k8s-version-20220412200421-42006 logs kubernetes-dashboard-6fb5469cf5-k6tnl -n kubernetes-dashboard
start_stop_delete_test.go:258: (dbg) Non-zero exit: kubectl --context old-k8s-version-20220412200421-42006 logs kubernetes-dashboard-6fb5469cf5-k6tnl -n kubernetes-dashboard: context deadline exceeded (98ns)
start_stop_delete_test.go:258: kubectl --context old-k8s-version-20220412200421-42006 logs kubernetes-dashboard-6fb5469cf5-k6tnl -n kubernetes-dashboard: context deadline exceeded
start_stop_delete_test.go:259: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: timed out waiting for the condition
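For reference, the condition this assertion polls for can be reproduced by hand with a one-shot kubectl wait; a minimal sketch, reusing the profile context and namespace from the log above (an equivalent manual check under those assumptions, not the harness's actual polling code):

	# Wait up to 9 minutes for a Ready dashboard pod after the stop/start cycle.
	kubectl --context old-k8s-version-20220412200421-42006 \
	  -n kubernetes-dashboard wait pod \
	  -l k8s-app=kubernetes-dashboard \
	  --for=condition=Ready --timeout=9m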
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-20220412200421-42006

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:235: (dbg) docker inspect old-k8s-version-20220412200421-42006:

-- stdout --
	[
	    {
	        "Id": "a5e4ff2bbf6e0c1f98d862b7c5909f328d958a622c77ca8f2a1aeb8757f4bc42",
	        "Created": "2022-04-12T20:04:30.270409412Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 289668,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-04-12T20:17:29.938914583Z",
	            "FinishedAt": "2022-04-12T20:17:28.601618224Z"
	        },
	        "Image": "sha256:44d43b69f3d5ba7f801dca891b535f23f9839671e82277938ec7dc42a22c50d6",
	        "ResolvConfPath": "/var/lib/docker/containers/a5e4ff2bbf6e0c1f98d862b7c5909f328d958a622c77ca8f2a1aeb8757f4bc42/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a5e4ff2bbf6e0c1f98d862b7c5909f328d958a622c77ca8f2a1aeb8757f4bc42/hostname",
	        "HostsPath": "/var/lib/docker/containers/a5e4ff2bbf6e0c1f98d862b7c5909f328d958a622c77ca8f2a1aeb8757f4bc42/hosts",
	        "LogPath": "/var/lib/docker/containers/a5e4ff2bbf6e0c1f98d862b7c5909f328d958a622c77ca8f2a1aeb8757f4bc42/a5e4ff2bbf6e0c1f98d862b7c5909f328d958a622c77ca8f2a1aeb8757f4bc42-json.log",
	        "Name": "/old-k8s-version-20220412200421-42006",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "old-k8s-version-20220412200421-42006:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20220412200421-42006",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/7832f59e03daf68e56b6521f25b5ed3223d02619c327fdde0f78d7822640d042-init/diff:/var/lib/docker/overlay2/a46d95d024de4bf9705eb193a92586bdab1878cd991975232b71b00099a9dcbd/diff:/var/lib/docker/overlay2/ea82ee4a684697cc3575193cd81b57372b927c9bf8e744fce634f9abd0ce56f9/diff:/var/lib/docker/overlay2/78746ad8dd0d6497f442bd186c99cfd280a7ed0ff07c9d33d217c0f00c8c4565/diff:/var/lib/docker/overlay2/a402f380eceb56655ea5f1e6ca4a61a01ae014a5df04f1a7d02d8f57ff3e6c84/diff:/var/lib/docker/overlay2/b27a231791a4d14a662f9e6e34fdd213411e56cc17149199657aa480018b3c72/diff:/var/lib/docker/overlay2/0a44e7fc2c8d5589d496b9d0585d39e8e142f48342ff9669a35c370bd0298e42/diff:/var/lib/docker/overlay2/6ca98e52ca7d4cc60d14bd2db9969dd3356e0e0ce3acd5bfb5734e6e59f52c7e/diff:/var/lib/docker/overlay2/9957a7c00c30c9d801326093ddf20994a7ee1daaa54bc4dac5c2dd6d8711bd7e/diff:/var/lib/docker/overlay2/f7a1aafecf6ee716c484b5eecbbf236a53607c253fe283c289707fad85495a88/diff:/var/lib/docker/overlay2/fe8cd1
26522650fedfc827751e0b74da9a882ff48de51bc9dee6428ee3bc1122/diff:/var/lib/docker/overlay2/5b4cc7e4a78288063ad39231ca158608aa28e9dec6015d4e186e4c4d6888017f/diff:/var/lib/docker/overlay2/2a754ceb6abee0f92c99667fae50c7899233e94595630e9caffbf73cda1ff741/diff:/var/lib/docker/overlay2/9e69139d9b2bc63ab678378e004018ece394ec37e8289ba5eb30901dda160da5/diff:/var/lib/docker/overlay2/3db8e6413b3a1f309b81d2e1a79c3d239c4e4568b31a6f4bf92511f477f3a61d/diff:/var/lib/docker/overlay2/5ab54e45d09e2d6da4f4228ebae3075b5974e1d847526c1011fc7368392ef0d2/diff:/var/lib/docker/overlay2/6daf6a3cf916347bbbb70ace4aab29dd0f272dc9e39d6b0bf14940470857f1d5/diff:/var/lib/docker/overlay2/b85d29df9ed74e769c82a956eb46ca4eaf51018e94270fee2f58a6f2d82c354c/diff:/var/lib/docker/overlay2/0804b9c30e0dcc68e15139106e47bca1969b010d520652c87ff1476f5da9b799/diff:/var/lib/docker/overlay2/2ef50ba91c77826aae2efca8daf7194c2d56fd8e745476a35413585cdab580a6/diff:/var/lib/docker/overlay2/6f5a272367c30d47254dedc8a42e6b2791c406c3b74fd6a8242d568e4ec362e3/diff:/var/lib/d
ocker/overlay2/e978bd5ca7463862ca1b51d0bf19f95d916464dc866f09f1ab4a5ae4c082c3a9/diff:/var/lib/docker/overlay2/0d60a5805e276ca3bff4824250eab1d2960e9d10d28282e07652204c07dc107f/diff:/var/lib/docker/overlay2/d00efa0bc999057fcf3efdeed81022cc8b9b9871919f11d7d9199a3d22fda41b/diff:/var/lib/docker/overlay2/44d3db5bf7925c4cc8ee60008ff23d799e12ea6586850d797b930fa796788861/diff:/var/lib/docker/overlay2/4af15c525b7ce96b7fd4117c156f53cf9099702641c2907909c12b7019563d44/diff:/var/lib/docker/overlay2/ae9ca4b8da4afb1303158a42ec2ac83dc057c0eaefcd69b7eeaa094ae24a39e7/diff:/var/lib/docker/overlay2/afb8ebd776ddcba17d1056f2350cd0b303c6664964644896a92e9c07252b5d95/diff:/var/lib/docker/overlay2/41b6235378ad54ccaec907f16811e7cd66bd777db63151293f4d8247a33af8f1/diff:/var/lib/docker/overlay2/e079465076581cb577a9d5c7d676cecb6495ddd73d9fc330e734203dd7e48607/diff:/var/lib/docker/overlay2/2d3a7c3e62a99d54d94c2562e13b904453442bda8208afe73cdbe1afdbdd0684/diff:/var/lib/docker/overlay2/b9e03b9cbc1c5a9bbdbb0c99ca5d7539c2fa81a37872c40e07377b52f19
50f4b/diff:/var/lib/docker/overlay2/fd0b72378869edec809e7ead1e4448ae67c73245e0e98d751c51253c80f12d56/diff:/var/lib/docker/overlay2/a34f5625ad35eb2eb1058204a5c23590d70d9aae62a3a0cf05f87501c388ccde/diff:/var/lib/docker/overlay2/6221ad5f4d7b133c35d96ab112cf2eb437196475a72ea0ec8952c058c6644381/diff:/var/lib/docker/overlay2/b33a322162ab62a47e5e731b35da4a989d8a79fcb67e1925b109eace6772370c/diff:/var/lib/docker/overlay2/b52fc81aca49f276f1c709fa139521063628f4042b9da5969a3487a57ee3226b/diff:/var/lib/docker/overlay2/5b4d11a181cad1ea657c7ea99d422b51c942ece21b8d24442b4e8806644e0e1c/diff:/var/lib/docker/overlay2/1620ce1d42f02f38d07f3ff0970e3df6940a3be20f3c7cd835f4f40f5cc2d010/diff:/var/lib/docker/overlay2/43f18c528700dc241024bb24f43a0d5192ecc9575f4b053582410f6265326434/diff:/var/lib/docker/overlay2/e59874999e485483e50da428a499e40c91890c33515857454d7a64bc04ca0c43/diff:/var/lib/docker/overlay2/a120ff1bbaa325cd87d2682d6751d3bf287b66d4bbe31bd1f9f6283d724491ac/diff:/var/lib/docker/overlay2/a6a6f3646fabc023283ff6349b9627be8332c4
bb740688f8fda12c98bd76b725/diff:/var/lib/docker/overlay2/3c2b110c4b3a8689b2792b2b73f99f06bd9858b494c2164e812208579b0223f2/diff:/var/lib/docker/overlay2/98e3881e2e4128283f8d66fafc082bc795e22eab77f135635d3249367b92ba5c/diff:/var/lib/docker/overlay2/ce937670cf64eff618c699bfd15e46c6d70c0184fef594182e5ec6df83b265bc/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7832f59e03daf68e56b6521f25b5ed3223d02619c327fdde0f78d7822640d042/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7832f59e03daf68e56b6521f25b5ed3223d02619c327fdde0f78d7822640d042/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7832f59e03daf68e56b6521f25b5ed3223d02619c327fdde0f78d7822640d042/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20220412200421-42006",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20220412200421-42006/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20220412200421-42006",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20220412200421-42006",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20220412200421-42006",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "53e31e93073e14b87893ecc02eec943a790f513e23d81081fb89673144f54f48",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49427"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49426"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49423"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49425"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49424"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/53e31e93073e",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20220412200421-42006": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "a5e4ff2bbf6e",
	                        "old-k8s-version-20220412200421-42006"
	                    ],
	                    "NetworkID": "0b96a6a249d72d5fff5d5b9db029edbfc6a07a56e8064108c65000591927cbc6",
	                    "EndpointID": "6781e09d44ca1ec39a13b240ba7487d8f08130968a667575f2ffa3cc79c9fd8d",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
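The post-mortem above captures the full docker inspect JSON; when only a few fields matter, the same data can be pulled directly with docker's Go-template --format flag, the style the surrounding status and inspect commands already use. A minimal sketch against the container named above:

	# Print the container state plus the host port published for the API server (8443/tcp).
	docker container inspect old-k8s-version-20220412200421-42006 \
	  --format '{{.State.Status}} {{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'
	# Against the dump above this yields: running 49424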
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20220412200421-42006 -n old-k8s-version-20220412200421-42006
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-20220412200421-42006 logs -n 25

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                            Args                            |                     Profile                     |  User   | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| start   | -p newest-cni-20220412201253-42006 --memory=2200           | newest-cni-20220412201253-42006                 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:14:08 UTC | Tue, 12 Apr 2022 20:14:42 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                 |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                 |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                 |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                 |         |         |                               |                               |
	|         | --driver=docker  --container-runtime=containerd            |                                                 |         |         |                               |                               |
	|         | --kubernetes-version=v1.23.6-rc.0                          |                                                 |         |         |                               |                               |
	| ssh     | -p                                                         | newest-cni-20220412201253-42006                 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:14:43 UTC | Tue, 12 Apr 2022 20:14:43 UTC |
	|         | newest-cni-20220412201253-42006                            |                                                 |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                 |         |         |                               |                               |
	| pause   | -p                                                         | newest-cni-20220412201253-42006                 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:14:43 UTC | Tue, 12 Apr 2022 20:14:44 UTC |
	|         | newest-cni-20220412201253-42006                            |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                               |                               |
	| unpause | -p                                                         | newest-cni-20220412201253-42006                 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:14:45 UTC | Tue, 12 Apr 2022 20:14:45 UTC |
	|         | newest-cni-20220412201253-42006                            |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                 |         |         |                               |                               |
	| delete  | -p                                                         | newest-cni-20220412201253-42006                 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:14:46 UTC | Tue, 12 Apr 2022 20:14:49 UTC |
	|         | newest-cni-20220412201253-42006                            |                                                 |         |         |                               |                               |
	| delete  | -p                                                         | newest-cni-20220412201253-42006                 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:14:49 UTC | Tue, 12 Apr 2022 20:14:49 UTC |
	|         | newest-cni-20220412201253-42006                            |                                                 |         |         |                               |                               |
	| -p      | old-k8s-version-20220412200421-42006                       | old-k8s-version-20220412200421-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:17:18 UTC | Tue, 12 Apr 2022 20:17:19 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	| -p      | old-k8s-version-20220412200421-42006                       | old-k8s-version-20220412200421-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:17:20 UTC | Tue, 12 Apr 2022 20:17:21 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | old-k8s-version-20220412200421-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:17:22 UTC | Tue, 12 Apr 2022 20:17:22 UTC |
	|         | old-k8s-version-20220412200421-42006                       |                                                 |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                               |                               |
	| -p      | default-k8s-different-port-20220412201228-42006            | default-k8s-different-port-20220412201228-42006 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:17:23 UTC | Tue, 12 Apr 2022 20:17:24 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	| stop    | -p                                                         | old-k8s-version-20220412200421-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:17:23 UTC | Tue, 12 Apr 2022 20:17:28 UTC |
	|         | old-k8s-version-20220412200421-42006                       |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | old-k8s-version-20220412200421-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:17:29 UTC | Tue, 12 Apr 2022 20:17:29 UTC |
	|         | old-k8s-version-20220412200421-42006                       |                                                 |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                               |                               |
	| -p      | embed-certs-20220412200510-42006                           | embed-certs-20220412200510-42006                | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:18:10 UTC | Tue, 12 Apr 2022 20:18:11 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	| -p      | embed-certs-20220412200510-42006                           | embed-certs-20220412200510-42006                | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:18:13 UTC | Tue, 12 Apr 2022 20:18:13 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | embed-certs-20220412200510-42006                | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:18:14 UTC | Tue, 12 Apr 2022 20:18:14 UTC |
	|         | embed-certs-20220412200510-42006                           |                                                 |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                               |                               |
	| stop    | -p                                                         | embed-certs-20220412200510-42006                | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:18:15 UTC | Tue, 12 Apr 2022 20:18:25 UTC |
	|         | embed-certs-20220412200510-42006                           |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | embed-certs-20220412200510-42006                | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:18:25 UTC | Tue, 12 Apr 2022 20:18:25 UTC |
	|         | embed-certs-20220412200510-42006                           |                                                 |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                               |                               |
	| -p      | default-k8s-different-port-20220412201228-42006            | default-k8s-different-port-20220412201228-42006 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:25:26 UTC | Tue, 12 Apr 2022 20:25:27 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	| -p      | default-k8s-different-port-20220412201228-42006            | default-k8s-different-port-20220412201228-42006 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:25:28 UTC | Tue, 12 Apr 2022 20:25:29 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | default-k8s-different-port-20220412201228-42006 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:25:29 UTC | Tue, 12 Apr 2022 20:25:30 UTC |
	|         | default-k8s-different-port-20220412201228-42006            |                                                 |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                 |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                 |         |         |                               |                               |
	| stop    | -p                                                         | default-k8s-different-port-20220412201228-42006 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:25:30 UTC | Tue, 12 Apr 2022 20:25:40 UTC |
	|         | default-k8s-different-port-20220412201228-42006            |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                 |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | default-k8s-different-port-20220412201228-42006 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:25:40 UTC | Tue, 12 Apr 2022 20:25:40 UTC |
	|         | default-k8s-different-port-20220412201228-42006            |                                                 |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                 |         |         |                               |                               |
	| -p      | old-k8s-version-20220412200421-42006                       | old-k8s-version-20220412200421-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:27:23 UTC | Tue, 12 Apr 2022 20:27:24 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	| -p      | embed-certs-20220412200510-42006                           | embed-certs-20220412200510-42006                | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:27:28 UTC | Tue, 12 Apr 2022 20:27:28 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	| -p      | default-k8s-different-port-20220412201228-42006            | default-k8s-different-port-20220412201228-42006 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:34:43 UTC | Tue, 12 Apr 2022 20:34:44 UTC |
	|         | logs -n 25                                                 |                                                 |         |         |                               |                               |
	|---------|------------------------------------------------------------|-------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/04/12 20:25:40
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.18 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0412 20:25:40.977489  302775 out.go:297] Setting OutFile to fd 1 ...
	I0412 20:25:40.977641  302775 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0412 20:25:40.977651  302775 out.go:310] Setting ErrFile to fd 2...
	I0412 20:25:40.977656  302775 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0412 20:25:40.977775  302775 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/bin
	I0412 20:25:40.978024  302775 out.go:304] Setting JSON to false
	I0412 20:25:40.979319  302775 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":11294,"bootTime":1649783847,"procs":329,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1023-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0412 20:25:40.979397  302775 start.go:125] virtualization: kvm guest
	I0412 20:25:40.982252  302775 out.go:176] * [default-k8s-different-port-20220412201228-42006] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	I0412 20:25:40.984292  302775 out.go:176]   - MINIKUBE_LOCATION=13812
	I0412 20:25:40.982508  302775 notify.go:193] Checking for updates...
	I0412 20:25:40.986069  302775 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0412 20:25:40.987699  302775 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0412 20:25:40.989177  302775 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube
	I0412 20:25:40.990958  302775 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0412 20:25:40.991481  302775 config.go:178] Loaded profile config "default-k8s-different-port-20220412201228-42006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.5
	I0412 20:25:40.992603  302775 driver.go:346] Setting default libvirt URI to qemu:///system
	I0412 20:25:41.036514  302775 docker.go:137] docker version: linux-20.10.14
	I0412 20:25:41.036604  302775 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0412 20:25:41.138222  302775 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:75 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2022-04-12 20:25:41.069111625 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1023-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662799872 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0412 20:25:41.138342  302775 docker.go:254] overlay module found
	I0412 20:25:41.140887  302775 out.go:176] * Using the docker driver based on existing profile
	I0412 20:25:41.140919  302775 start.go:284] selected driver: docker
	I0412 20:25:41.140926  302775 start.go:801] validating driver "docker" against &{Name:default-k8s-different-port-20220412201228-42006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:default-k8s-different-port-20220412201228-42006 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.23.5 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0412 20:25:41.141041  302775 start.go:812] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	W0412 20:25:41.141086  302775 oci.go:120] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0412 20:25:41.141109  302775 out.go:241] ! Your cgroup does not allow setting memory.
	I0412 20:25:41.142724  302775 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0412 20:25:41.143315  302775 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0412 20:25:41.241191  302775 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:75 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2022-04-12 20:25:41.17623516 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1023-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662799872 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	W0412 20:25:41.241354  302775 oci.go:120] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0412 20:25:41.241406  302775 out.go:241] ! Your cgroup does not allow setting memory.
	I0412 20:25:41.243729  302775 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0412 20:25:41.243836  302775 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0412 20:25:41.243861  302775 cni.go:93] Creating CNI manager for ""
	I0412 20:25:41.243872  302775 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0412 20:25:41.243889  302775 start_flags.go:306] config:
	{Name:default-k8s-different-port-20220412201228-42006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:default-k8s-different-port-20220412201228-42006 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.23.5 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0412 20:25:41.246889  302775 out.go:176] * Starting control plane node default-k8s-different-port-20220412201228-42006 in cluster default-k8s-different-port-20220412201228-42006
	I0412 20:25:41.246928  302775 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0412 20:25:41.248537  302775 out.go:176] * Pulling base image ...
	I0412 20:25:41.248572  302775 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime containerd
	I0412 20:25:41.248612  302775 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-containerd-overlay2-amd64.tar.lz4
	I0412 20:25:41.248642  302775 cache.go:57] Caching tarball of preloaded images
	I0412 20:25:41.248665  302775 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon
	I0412 20:25:41.248918  302775 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0412 20:25:41.248940  302775 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.5 on containerd
	I0412 20:25:41.249111  302775 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220412201228-42006/config.json ...
	I0412 20:25:41.295232  302775 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon, skipping pull
	I0412 20:25:41.295265  302775 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 exists in daemon, skipping load
	I0412 20:25:41.295288  302775 cache.go:206] Successfully downloaded all kic artifacts
	I0412 20:25:41.295333  302775 start.go:352] acquiring machines lock for default-k8s-different-port-20220412201228-42006: {Name:mk673e2ef5ad74005354b6f8044ae48e370ea3c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0412 20:25:41.295441  302775 start.go:356] acquired machines lock for "default-k8s-different-port-20220412201228-42006" in 78.98µs
	I0412 20:25:41.295472  302775 start.go:94] Skipping create...Using existing machine configuration
	I0412 20:25:41.295481  302775 fix.go:55] fixHost starting: 
	I0412 20:25:41.295714  302775 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220412201228-42006 --format={{.State.Status}}
	I0412 20:25:41.330052  302775 fix.go:103] recreateIfNeeded on default-k8s-different-port-20220412201228-42006: state=Stopped err=<nil>
	W0412 20:25:41.330099  302775 fix.go:129] unexpected machine state, will restart: <nil>
	I0412 20:25:39.404942  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:25:41.405860  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:25:43.905123  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:25:41.529434  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:25:44.030080  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:25:41.332812  302775 out.go:176] * Restarting existing docker container for "default-k8s-different-port-20220412201228-42006" ...
	I0412 20:25:41.332900  302775 cli_runner.go:164] Run: docker start default-k8s-different-port-20220412201228-42006
	I0412 20:25:41.735198  302775 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220412201228-42006 --format={{.State.Status}}
	I0412 20:25:41.771480  302775 kic.go:416] container "default-k8s-different-port-20220412201228-42006" state is running.
	I0412 20:25:41.771899  302775 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220412201228-42006
	I0412 20:25:41.807070  302775 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220412201228-42006/config.json ...
	I0412 20:25:41.807321  302775 machine.go:88] provisioning docker machine ...
	I0412 20:25:41.807352  302775 ubuntu.go:169] provisioning hostname "default-k8s-different-port-20220412201228-42006"
	I0412 20:25:41.807404  302775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220412201228-42006
	I0412 20:25:41.843643  302775 main.go:134] libmachine: Using SSH client type: native
	I0412 20:25:41.843852  302775 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7d71c0] 0x7da220 <nil>  [] 0s} 127.0.0.1 49437 <nil> <nil>}
	I0412 20:25:41.843870  302775 main.go:134] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20220412201228-42006 && echo "default-k8s-different-port-20220412201228-42006" | sudo tee /etc/hostname
	I0412 20:25:41.844512  302775 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60986->127.0.0.1:49437: read: connection reset by peer
	I0412 20:25:44.977976  302775 main.go:134] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20220412201228-42006
	
	I0412 20:25:44.978060  302775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220412201228-42006
	I0412 20:25:45.012801  302775 main.go:134] libmachine: Using SSH client type: native
	I0412 20:25:45.012959  302775 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7d71c0] 0x7da220 <nil>  [] 0s} 127.0.0.1 49437 <nil> <nil>}
	I0412 20:25:45.012982  302775 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20220412201228-42006' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20220412201228-42006/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20220412201228-42006' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0412 20:25:45.132428  302775 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I0412 20:25:45.132458  302775 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube}
	I0412 20:25:45.132515  302775 ubuntu.go:177] setting up certificates
	I0412 20:25:45.132527  302775 provision.go:83] configureAuth start
	I0412 20:25:45.132583  302775 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220412201228-42006
	I0412 20:25:45.167292  302775 provision.go:138] copyHostCerts
	I0412 20:25:45.167378  302775 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem, removing ...
	I0412 20:25:45.167393  302775 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem
	I0412 20:25:45.167463  302775 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem (1082 bytes)
	I0412 20:25:45.167565  302775 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem, removing ...
	I0412 20:25:45.167579  302775 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem
	I0412 20:25:45.167616  302775 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem (1123 bytes)
	I0412 20:25:45.167686  302775 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem, removing ...
	I0412 20:25:45.167698  302775 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem
	I0412 20:25:45.167731  302775 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem (1675 bytes)
	I0412 20:25:45.167790  302775 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20220412201228-42006 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-different-port-20220412201228-42006]
	I0412 20:25:45.287902  302775 provision.go:172] copyRemoteCerts
	I0412 20:25:45.287991  302775 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0412 20:25:45.288040  302775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220412201228-42006
	I0412 20:25:45.322519  302775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220412201228-42006/id_rsa Username:docker}
	I0412 20:25:45.411995  302775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0412 20:25:45.430261  302775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem --> /etc/docker/server.pem (1310 bytes)
	I0412 20:25:45.448712  302775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0412 20:25:45.466551  302775 provision.go:86] duration metric: configureAuth took 334.00574ms
	I0412 20:25:45.466577  302775 ubuntu.go:193] setting minikube options for container-runtime
	I0412 20:25:45.466762  302775 config.go:178] Loaded profile config "default-k8s-different-port-20220412201228-42006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.5
	I0412 20:25:45.466775  302775 machine.go:91] provisioned docker machine in 3.659438406s
	I0412 20:25:45.466782  302775 start.go:306] post-start starting for "default-k8s-different-port-20220412201228-42006" (driver="docker")
	I0412 20:25:45.466788  302775 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0412 20:25:45.466829  302775 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0412 20:25:45.466867  302775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220412201228-42006
	I0412 20:25:45.501481  302775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220412201228-42006/id_rsa Username:docker}
	I0412 20:25:45.588112  302775 ssh_runner.go:195] Run: cat /etc/os-release
	I0412 20:25:45.591046  302775 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0412 20:25:45.591069  302775 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0412 20:25:45.591080  302775 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0412 20:25:45.591089  302775 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0412 20:25:45.591103  302775 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/addons for local assets ...
	I0412 20:25:45.591152  302775 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files for local assets ...
	I0412 20:25:45.591229  302775 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/420062.pem -> 420062.pem in /etc/ssl/certs
	I0412 20:25:45.591327  302775 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0412 20:25:45.598574  302775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/420062.pem --> /etc/ssl/certs/420062.pem (1708 bytes)
	I0412 20:25:45.617879  302775 start.go:309] post-start completed in 151.076407ms
	I0412 20:25:45.617968  302775 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0412 20:25:45.618023  302775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220412201228-42006
	I0412 20:25:45.652386  302775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220412201228-42006/id_rsa Username:docker}
	I0412 20:25:45.736884  302775 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0412 20:25:45.741043  302775 fix.go:57] fixHost completed within 4.445551228s
	I0412 20:25:45.741076  302775 start.go:81] releasing machines lock for "default-k8s-different-port-20220412201228-42006", held for 4.445612789s
	I0412 20:25:45.741159  302775 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220412201228-42006
	I0412 20:25:45.775496  302775 ssh_runner.go:195] Run: systemctl --version
	I0412 20:25:45.775542  302775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220412201228-42006
	I0412 20:25:45.775584  302775 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0412 20:25:45.775646  302775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220412201228-42006
	I0412 20:25:45.812306  302775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220412201228-42006/id_rsa Username:docker}
	I0412 20:25:45.812626  302775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220412201228-42006/id_rsa Username:docker}
	I0412 20:25:45.921246  302775 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0412 20:25:45.933022  302775 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0412 20:25:45.942974  302775 docker.go:183] disabling docker service ...
	I0412 20:25:45.943055  302775 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0412 20:25:45.953239  302775 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0412 20:25:45.962782  302775 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0412 20:25:46.404485  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:25:48.404784  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:25:46.529944  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:25:48.530319  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:25:46.046623  302775 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0412 20:25:46.129007  302775 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0412 20:25:46.138577  302775 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0412 20:25:46.152328  302775 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLm1vbml0b3IudjEuY2dyb3VwcyJdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNiIKICAgIHN0YXRzX2NvbGxlY3RfcGVyaW9kID0gMTAKICAgIGVuYWJsZV90bHNfc3RyZWFtaW5nID0gZ
mFsc2UKICAgIG1heF9jb250YWluZXJfbG9nX2xpbmVfc2l6ZSA9IDE2Mzg0CiAgICByZXN0cmljdF9vb21fc2NvcmVfYWRqID0gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZF0KICAgICAgZGlzY2FyZF91bnBhY2tlZF9sYXllcnMgPSB0cnVlCiAgICAgIHNuYXBzaG90dGVyID0gIm92ZXJsYXlmcyIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQuZGVmYXVsdF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnVudHJ1c3RlZF93b3JrbG9hZF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICIiCiAgICAgICAgcnVudGltZV9lbmdpbmUgPSAiIgogICAgICAgIHJ1bnRpbWVfcm9vdCA9ICIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzLnJ1bmNdCiAgICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICBTeXN0ZW1kQ
2dyb3VwID0gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY25pXQogICAgICBiaW5fZGlyID0gIi9vcHQvY25pL2JpbiIKICAgICAgY29uZl9kaXIgPSAiL2V0Yy9jbmkvbmV0Lm1rIgogICAgICBjb25mX3RlbXBsYXRlID0gIiIKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5yZWdpc3RyeV0KICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5Lm1pcnJvcnNdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5Lm1pcnJvcnMuImRvY2tlci5pbyJdCiAgICAgICAgICBlbmRwb2ludCA9IFsiaHR0cHM6Ly9yZWdpc3RyeS0xLmRvY2tlci5pbyJdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuc2VydmljZS52MS5kaWZmLXNlcnZpY2UiXQogICAgZGVmYXVsdCA9IFsid2Fsa2luZyJdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ2MudjEuc2NoZWR1bGVyIl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
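The base64 payload above is minikube's generated containerd configuration; it decodes to the /etc/containerd/config.toml being written. An excerpt of the decoded file, showing the settings that matter for this run:

	version = 2
	root = "/var/lib/containerd"
	state = "/run/containerd"
	[plugins."io.containerd.grpc.v1.cri"]
	  sandbox_image = "k8s.gcr.io/pause:3.6"
	  [plugins."io.containerd.grpc.v1.cri".containerd]
	    snapshotter = "overlayfs"
	    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	      SystemdCgroup = false
	  [plugins."io.containerd.grpc.v1.cri".cni]
	    bin_dir = "/opt/cni/bin"
	    conf_dir = "/etc/cni/net.mk"

Note that conf_dir matches the kubelet.cni-conf-dir=/etc/cni/net.mk extra option that appears throughout this run.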
	I0412 20:25:46.166473  302775 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0412 20:25:46.173272  302775 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0412 20:25:46.180113  302775 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0412 20:25:46.251894  302775 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0412 20:25:46.327719  302775 start.go:441] Will wait 60s for socket path /run/containerd/containerd.sock
	I0412 20:25:46.327799  302775 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0412 20:25:46.331793  302775 start.go:462] Will wait 60s for crictl version
	I0412 20:25:46.331863  302775 ssh_runner.go:195] Run: sudo crictl version
	I0412 20:25:46.357306  302775 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-04-12T20:25:46Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0412 20:25:50.405078  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:25:52.905509  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:25:51.029894  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:25:53.030953  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:25:55.529321  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:25:57.404189  302775 ssh_runner.go:195] Run: sudo crictl version
	I0412 20:25:57.428756  302775 start.go:471] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.5.10
	RuntimeApiVersion:  v1alpha2
	I0412 20:25:57.428821  302775 ssh_runner.go:195] Run: containerd --version
	I0412 20:25:57.451527  302775 ssh_runner.go:195] Run: containerd --version
	I0412 20:25:57.476141  302775 out.go:176] * Preparing Kubernetes v1.23.5 on containerd 1.5.10 ...
	I0412 20:25:57.476238  302775 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220412201228-42006 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0412 20:25:57.510584  302775 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0412 20:25:57.514080  302775 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0412 20:25:55.405528  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:25:57.904637  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:25:57.529524  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:25:59.529890  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:25:57.525999  302775 out.go:176]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0412 20:25:57.526084  302775 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime containerd
	I0412 20:25:57.526141  302775 ssh_runner.go:195] Run: sudo crictl images --output json
	I0412 20:25:57.550533  302775 containerd.go:607] all images are preloaded for containerd runtime.
	I0412 20:25:57.550557  302775 containerd.go:521] Images already preloaded, skipping extraction
	I0412 20:25:57.550612  302775 ssh_runner.go:195] Run: sudo crictl images --output json
	I0412 20:25:57.574550  302775 containerd.go:607] all images are preloaded for containerd runtime.
	I0412 20:25:57.574580  302775 cache_images.go:84] Images are preloaded, skipping loading
	I0412 20:25:57.574639  302775 ssh_runner.go:195] Run: sudo crictl info
	I0412 20:25:57.599639  302775 cni.go:93] Creating CNI manager for ""
	I0412 20:25:57.599668  302775 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0412 20:25:57.599690  302775 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0412 20:25:57.599711  302775 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8444 KubernetesVersion:v1.23.5 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20220412201228-42006 NodeName:default-k8s-different-port-20220412201228-42006 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49
.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0412 20:25:57.599848  302775 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "default-k8s-different-port-20220412201228-42006"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.5
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0412 20:25:57.599941  302775 kubeadm.go:936] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.5/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=default-k8s-different-port-20220412201228-42006 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.5 ClusterName:default-k8s-different-port-20220412201228-42006 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
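Everything above lands on the node as plain files: the kubelet flags become the systemd drop-in 10-kubeadm.conf, and the kubeadm YAML is staged as kubeadm.yaml.new before being promoted. Two ways to inspect the result on the node (a sketch; the paths are the ones scp'd just below):

	systemctl cat kubelet   # prints the unit plus the 10-kubeadm.conf drop-in
	sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new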
	I0412 20:25:57.600004  302775 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.5
	I0412 20:25:57.607520  302775 binaries.go:44] Found k8s binaries, skipping transfer
	I0412 20:25:57.607582  302775 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0412 20:25:57.614505  302775 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (592 bytes)
	I0412 20:25:57.627492  302775 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0412 20:25:57.640002  302775 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2076 bytes)
	I0412 20:25:57.652626  302775 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0412 20:25:57.655502  302775 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0412 20:25:57.664909  302775 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220412201228-42006 for IP: 192.168.49.2
	I0412 20:25:57.665006  302775 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.key
	I0412 20:25:57.665052  302775 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.key
	I0412 20:25:57.665122  302775 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220412201228-42006/client.key
	I0412 20:25:57.665173  302775 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220412201228-42006/apiserver.key.dd3b5fb2
	I0412 20:25:57.665208  302775 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220412201228-42006/proxy-client.key
	I0412 20:25:57.665293  302775 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/42006.pem (1338 bytes)
	W0412 20:25:57.665321  302775 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/42006_empty.pem, impossibly tiny 0 bytes
	I0412 20:25:57.665332  302775 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem (1679 bytes)
	I0412 20:25:57.665358  302775 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem (1082 bytes)
	I0412 20:25:57.665384  302775 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem (1123 bytes)
	I0412 20:25:57.665409  302775 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem (1675 bytes)
	I0412 20:25:57.665455  302775 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/420062.pem (1708 bytes)
	I0412 20:25:57.666053  302775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220412201228-42006/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0412 20:25:57.683954  302775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220412201228-42006/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0412 20:25:57.701541  302775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220412201228-42006/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0412 20:25:57.719461  302775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220412201228-42006/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0412 20:25:57.737734  302775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0412 20:25:57.756457  302775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0412 20:25:57.774968  302775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0412 20:25:57.793059  302775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0412 20:25:57.810982  302775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0412 20:25:57.829015  302775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/42006.pem --> /usr/share/ca-certificates/42006.pem (1338 bytes)
	I0412 20:25:57.847312  302775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/420062.pem --> /usr/share/ca-certificates/420062.pem (1708 bytes)
	I0412 20:25:57.864991  302775 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0412 20:25:57.878055  302775 ssh_runner.go:195] Run: openssl version
	I0412 20:25:57.883971  302775 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/420062.pem && ln -fs /usr/share/ca-certificates/420062.pem /etc/ssl/certs/420062.pem"
	I0412 20:25:57.892175  302775 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/420062.pem
	I0412 20:25:57.895736  302775 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Apr 12 19:26 /usr/share/ca-certificates/420062.pem
	I0412 20:25:57.895785  302775 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/420062.pem
	I0412 20:25:57.900802  302775 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/420062.pem /etc/ssl/certs/3ec20f2e.0"
	I0412 20:25:57.908397  302775 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0412 20:25:57.916262  302775 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0412 20:25:57.919469  302775 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Apr 12 19:21 /usr/share/ca-certificates/minikubeCA.pem
	I0412 20:25:57.919524  302775 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0412 20:25:57.924891  302775 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0412 20:25:57.932113  302775 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/42006.pem && ln -fs /usr/share/ca-certificates/42006.pem /etc/ssl/certs/42006.pem"
	I0412 20:25:57.940241  302775 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42006.pem
	I0412 20:25:57.943396  302775 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Apr 12 19:26 /usr/share/ca-certificates/42006.pem
	I0412 20:25:57.943447  302775 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42006.pem
	I0412 20:25:57.948339  302775 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/42006.pem /etc/ssl/certs/51391683.0"
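The hash-and-symlink dance above is how OpenSSL's CA directory lookup works: each certificate is linked under its subject hash as <hash>.0 in /etc/ssl/certs. The same step by hand, using the cert from this run (which hashes to 3ec20f2e):

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/420062.pem)
	sudo ln -fs /etc/ssl/certs/420062.pem "/etc/ssl/certs/$h.0"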
	I0412 20:25:57.955118  302775 kubeadm.go:391] StartCluster: {Name:default-k8s-different-port-20220412201228-42006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:default-k8s-different-port-20220412201228-42006 Namespace:defa
ult APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.23.5 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledS
top:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0412 20:25:57.955221  302775 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0412 20:25:57.955270  302775 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0412 20:25:57.980566  302775 cri.go:87] found id: "9833ae46466ccbb9055512e120cc659bee6ff8ac05bf843caeb9217333fd6b63"
	I0412 20:25:57.980602  302775 cri.go:87] found id: "e86db06fb9ce1685b312bc36622f28895b85dab6e39ee399901dce4efc6da848"
	I0412 20:25:57.980613  302775 cri.go:87] found id: "51def5f5fb57c8ab61a9c585b1fe038e725e93a3a81684c7e48cceffbcd0e646"
	I0412 20:25:57.980624  302775 cri.go:87] found id: "3c8657a1a5932876c532e5632e32b1b7bd034c015a4b5519a1ff53cf749d1ffd"
	I0412 20:25:57.980634  302775 cri.go:87] found id: "1032ec9dc604b2d805be253a0f7df89424fc5ef71ef86566ee57cd79cf66939c"
	I0412 20:25:57.980651  302775 cri.go:87] found id: "71af7fb31571e3cef12dcdba3ab49897e95bdbe6c1d9d6d5bbb1c36c97242cda"
	I0412 20:25:57.980666  302775 cri.go:87] found id: ""
	I0412 20:25:57.980719  302775 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0412 20:25:57.995137  302775 cri.go:114] JSON = null
	W0412 20:25:57.995186  302775 kubeadm.go:398] unpause failed: list paused: list returned 0 containers, but ps returned 6
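The warning above records a disagreement between two views of the same node: crictl, asking containerd's CRI service, sees six kube-system containers, while runc's state directory for the k8s.io namespace reports nothing, so the pre-restart unpause is skipped and minikube moves on. The two probes, verbatim from this run:

	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system   # 6 container ids
	sudo runc --root /run/containerd/runc/k8s.io list -f json                   # null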
	I0412 20:25:57.995232  302775 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0412 20:25:58.002528  302775 kubeadm.go:402] found existing configuration files, will attempt cluster restart
	I0412 20:25:58.002554  302775 kubeadm.go:601] restartCluster start
	I0412 20:25:58.002599  302775 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0412 20:25:58.009347  302775 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:25:58.010180  302775 kubeconfig.go:116] verify returned: extract IP: "default-k8s-different-port-20220412201228-42006" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0412 20:25:58.010679  302775 kubeconfig.go:127] "default-k8s-different-port-20220412201228-42006" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig - will repair!
	I0412 20:25:58.011431  302775 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig: {Name:mk47182dadcc139652898f38b199a7292a3b4031 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 20:25:58.013184  302775 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0412 20:25:58.020529  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:25:58.020588  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:25:58.029161  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:25:58.229565  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:25:58.229683  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:25:58.238841  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:25:58.430075  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:25:58.430153  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:25:58.439240  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:25:58.629511  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:25:58.629591  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:25:58.638727  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:25:58.829920  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:25:58.830002  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:25:58.839034  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:25:59.030207  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:25:59.030273  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:25:59.038870  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:25:59.230141  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:25:59.230228  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:25:59.239506  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:25:59.429823  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:25:59.429895  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:25:59.438940  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:25:59.630148  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:25:59.630223  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:25:59.639014  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:25:59.830279  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:25:59.830365  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:25:59.839400  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:26:00.029480  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:26:00.029578  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:26:00.039506  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:26:00.229819  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:26:00.229932  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:26:00.238666  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:26:00.429971  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:26:00.430041  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:26:00.439152  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:26:00.629391  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:26:00.629472  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:26:00.638771  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:26:00.830087  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:26:00.830179  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:26:00.839152  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
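Judging by the timestamps, the apiserver check above polls roughly every 200ms until restartCluster decides a reconfigure is needed; the probe itself is just pgrep. A shell equivalent of the loop (the interval is inferred from the log, not taken from minikube's source):

	while ! sudo pgrep -xnf 'kube-apiserver.*minikube.*'; do
	  sleep 0.2   # matches the ~200ms cadence visible in the timestamps
	done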
	I0412 20:25:59.905306  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:01.905660  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:02.030088  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:04.030403  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:01.029653  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:26:01.029717  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:26:01.038688  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:26:01.038731  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:26:01.038777  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:26:01.047040  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:26:01.047087  302775 kubeadm.go:576] needs reconfigure: apiserver error: timed out waiting for the condition
	I0412 20:26:01.047098  302775 kubeadm.go:1067] stopping kube-system containers ...
	I0412 20:26:01.047119  302775 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0412 20:26:01.047173  302775 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0412 20:26:01.074252  302775 cri.go:87] found id: "9833ae46466ccbb9055512e120cc659bee6ff8ac05bf843caeb9217333fd6b63"
	I0412 20:26:01.074279  302775 cri.go:87] found id: "e86db06fb9ce1685b312bc36622f28895b85dab6e39ee399901dce4efc6da848"
	I0412 20:26:01.074289  302775 cri.go:87] found id: "51def5f5fb57c8ab61a9c585b1fe038e725e93a3a81684c7e48cceffbcd0e646"
	I0412 20:26:01.074295  302775 cri.go:87] found id: "3c8657a1a5932876c532e5632e32b1b7bd034c015a4b5519a1ff53cf749d1ffd"
	I0412 20:26:01.074302  302775 cri.go:87] found id: "1032ec9dc604b2d805be253a0f7df89424fc5ef71ef86566ee57cd79cf66939c"
	I0412 20:26:01.074309  302775 cri.go:87] found id: "71af7fb31571e3cef12dcdba3ab49897e95bdbe6c1d9d6d5bbb1c36c97242cda"
	I0412 20:26:01.074316  302775 cri.go:87] found id: ""
	I0412 20:26:01.074322  302775 cri.go:232] Stopping containers: [9833ae46466ccbb9055512e120cc659bee6ff8ac05bf843caeb9217333fd6b63 e86db06fb9ce1685b312bc36622f28895b85dab6e39ee399901dce4efc6da848 51def5f5fb57c8ab61a9c585b1fe038e725e93a3a81684c7e48cceffbcd0e646 3c8657a1a5932876c532e5632e32b1b7bd034c015a4b5519a1ff53cf749d1ffd 1032ec9dc604b2d805be253a0f7df89424fc5ef71ef86566ee57cd79cf66939c 71af7fb31571e3cef12dcdba3ab49897e95bdbe6c1d9d6d5bbb1c36c97242cda]
	I0412 20:26:01.074376  302775 ssh_runner.go:195] Run: which crictl
	I0412 20:26:01.077493  302775 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop 9833ae46466ccbb9055512e120cc659bee6ff8ac05bf843caeb9217333fd6b63 e86db06fb9ce1685b312bc36622f28895b85dab6e39ee399901dce4efc6da848 51def5f5fb57c8ab61a9c585b1fe038e725e93a3a81684c7e48cceffbcd0e646 3c8657a1a5932876c532e5632e32b1b7bd034c015a4b5519a1ff53cf749d1ffd 1032ec9dc604b2d805be253a0f7df89424fc5ef71ef86566ee57cd79cf66939c 71af7fb31571e3cef12dcdba3ab49897e95bdbe6c1d9d6d5bbb1c36c97242cda
	I0412 20:26:01.103072  302775 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0412 20:26:01.114425  302775 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0412 20:26:01.122172  302775 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Apr 12 20:12 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Apr 12 20:12 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2127 Apr 12 20:13 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5592 Apr 12 20:12 /etc/kubernetes/scheduler.conf
	
	I0412 20:26:01.122241  302775 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0412 20:26:01.129554  302775 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0412 20:26:01.136877  302775 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0412 20:26:01.143698  302775 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:26:01.143755  302775 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0412 20:26:01.150238  302775 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0412 20:26:01.157232  302775 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:26:01.157288  302775 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0412 20:26:01.164343  302775 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0412 20:26:01.171782  302775 kubeadm.go:678] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0412 20:26:01.171805  302775 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0412 20:26:01.218060  302775 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0412 20:26:01.745379  302775 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0412 20:26:01.885213  302775 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0412 20:26:01.938174  302775 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0412 20:26:02.011809  302775 api_server.go:51] waiting for apiserver process to appear ...
	I0412 20:26:02.011879  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:26:02.521271  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:26:03.021279  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:26:03.521794  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:26:04.021460  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:26:04.521473  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:26:05.021310  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:26:05.521258  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:26:04.405325  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:06.905312  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:06.529561  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:08.530280  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:06.022069  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:26:06.522094  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:26:07.022120  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:26:07.521096  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:26:08.021120  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:26:08.091617  302775 api_server.go:71] duration metric: took 6.079806462s to wait for apiserver process to appear ...
	I0412 20:26:08.091701  302775 api_server.go:87] waiting for apiserver healthz status ...
	I0412 20:26:08.091726  302775 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0412 20:26:08.092170  302775 api_server.go:256] stopped: https://192.168.49.2:8444/healthz: Get "https://192.168.49.2:8444/healthz": dial tcp 192.168.49.2:8444: connect: connection refused
	I0412 20:26:08.592673  302775 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0412 20:26:11.086493  302775 api_server.go:266] https://192.168.49.2:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0412 20:26:11.086525  302775 api_server.go:102] status: https://192.168.49.2:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0412 20:26:11.092362  302775 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0412 20:26:11.097010  302775 api_server.go:266] https://192.168.49.2:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0412 20:26:11.097085  302775 api_server.go:102] status: https://192.168.49.2:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0412 20:26:11.592382  302775 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0412 20:26:11.597320  302775 api_server.go:266] https://192.168.49.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0412 20:26:11.597353  302775 api_server.go:102] status: https://192.168.49.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0412 20:26:12.092945  302775 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0412 20:26:12.097452  302775 api_server.go:266] https://192.168.49.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0412 20:26:12.097482  302775 api_server.go:102] status: https://192.168.49.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0412 20:26:12.593112  302775 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0412 20:26:12.598178  302775 api_server.go:266] https://192.168.49.2:8444/healthz returned 200:
	ok
	I0412 20:26:12.604429  302775 api_server.go:140] control plane version: v1.23.5
	I0412 20:26:12.604455  302775 api_server.go:130] duration metric: took 4.512735667s to wait for apiserver health ...
	I0412 20:26:12.604466  302775 cni.go:93] Creating CNI manager for ""
	I0412 20:26:12.604475  302775 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0412 20:26:09.405613  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:11.905154  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:11.029929  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:13.030209  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:15.530013  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:12.607164  302775 out.go:176] * Configuring CNI (Container Networking Interface) ...
	I0412 20:26:12.607235  302775 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0412 20:26:12.610895  302775 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.5/kubectl ...
	I0412 20:26:12.610917  302775 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0412 20:26:12.624805  302775 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0412 20:26:13.514228  302775 system_pods.go:43] waiting for kube-system pods to appear ...
	I0412 20:26:13.521326  302775 system_pods.go:59] 9 kube-system pods found
	I0412 20:26:13.521387  302775 system_pods.go:61] "coredns-64897985d-c2gzm" [17d60869-0f98-4975-877a-d2ac69c4c6c2] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0412 20:26:13.521400  302775 system_pods.go:61] "etcd-default-k8s-different-port-20220412201228-42006" [90ac8791-2f40-445e-a751-748814d43a72] Running
	I0412 20:26:13.521415  302775 system_pods.go:61] "kindnet-852v4" [d4596d79-4aba-4c96-9fd5-c2c2b2010810] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0412 20:26:13.521437  302775 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20220412201228-42006" [a3eb3b43-f13c-4205-9caf-0b3914050d7c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0412 20:26:13.521450  302775 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20220412201228-42006" [fca7914c-0a48-40de-af60-44c695d023c7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0412 20:26:13.521456  302775 system_pods.go:61] "kube-proxy-nfsgp" [fb26fa90-e38d-4c50-bbdc-aa46859bef70] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0412 20:26:13.521466  302775 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20220412201228-42006" [9fbd69c6-cf7b-4801-b028-f7729f80bf64] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0412 20:26:13.521475  302775 system_pods.go:61] "metrics-server-b955d9d8-8z9c9" [e954cf67-0a7d-42ed-b754-921b79512531] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0412 20:26:13.521484  302775 system_pods.go:61] "storage-provisioner" [c1d494a3-740b-43f4-bd16-12e781074fdd] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0412 20:26:13.521493  302775 system_pods.go:74] duration metric: took 7.243145ms to wait for pod list to return data ...
	I0412 20:26:13.521504  302775 node_conditions.go:102] verifying NodePressure condition ...
	I0412 20:26:13.524664  302775 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0412 20:26:13.524723  302775 node_conditions.go:123] node cpu capacity is 8
	I0412 20:26:13.524744  302775 node_conditions.go:105] duration metric: took 3.23136ms to run NodePressure ...
	I0412 20:26:13.524771  302775 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0412 20:26:13.661578  302775 kubeadm.go:737] waiting for restarted kubelet to initialise ...
	I0412 20:26:13.665722  302775 kubeadm.go:752] kubelet initialised
	I0412 20:26:13.665746  302775 kubeadm.go:753] duration metric: took 4.136738ms waiting for restarted kubelet to initialise ...
	I0412 20:26:13.665755  302775 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0412 20:26:13.670837  302775 pod_ready.go:78] waiting up to 4m0s for pod "coredns-64897985d-c2gzm" in "kube-system" namespace to be "Ready" ...
	I0412 20:26:15.676828  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:14.405001  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:16.405140  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:18.405282  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:18.029626  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:20.029796  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:18.177431  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:20.676699  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:20.904768  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:22.905306  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:22.530289  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:25.030441  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:22.676917  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:25.177312  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:25.405505  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:27.405547  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:27.529706  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:29.529954  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:27.677396  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:30.176836  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:29.904767  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:31.905389  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:32.029879  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:34.030539  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:32.177928  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:34.676583  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:34.405637  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:36.904807  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:36.030819  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:38.529411  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:40.529737  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:36.676861  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:38.676927  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:39.404491  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:41.404659  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:43.905243  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:43.029801  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:45.030177  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:41.177333  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:43.177431  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:45.177567  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:46.404939  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:48.405023  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:47.529990  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:50.029848  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:47.676992  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:50.177314  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:50.904925  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:52.905456  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:52.529958  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:54.530211  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:52.677354  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:55.177581  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:55.404968  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:57.904806  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:57.029172  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:59.029355  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:57.177797  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:59.676784  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:59.905303  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:27:02.404803  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:27:01.030119  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:27:03.529481  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:27:02.176739  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:04.677083  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:04.904522  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:27:06.905502  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:27:06.030007  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:27:08.529404  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:27:07.177282  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:09.677448  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:09.405228  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:27:11.905282  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:27:11.029791  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:27:13.030282  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:27:15.529429  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:27:12.176384  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:14.177069  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:14.404646  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:27:16.405558  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:27:18.905261  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:27:17.530006  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:27:20.030016  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:27:16.177280  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:18.677413  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:21.405385  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:27:22.907629  289404 node_ready.go:38] duration metric: took 4m0.012711851s waiting for node "old-k8s-version-20220412200421-42006" to be "Ready" ...
	I0412 20:27:22.910753  289404 out.go:176] 
	W0412 20:27:22.910934  289404 out.go:241] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0412 20:27:22.910950  289404 out.go:241] * 
	W0412 20:27:22.911829  289404 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0412 20:27:22.030056  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:27:24.529656  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:27:21.176971  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:23.676778  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:25.677210  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:27.029850  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:27:27.532457  293188 node_ready.go:38] duration metric: took 4m0.016261704s waiting for node "embed-certs-20220412200510-42006" to be "Ready" ...
	I0412 20:27:27.535074  293188 out.go:176] 
	W0412 20:27:27.535184  293188 out.go:241] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0412 20:27:27.535195  293188 out.go:241] * 
	W0412 20:27:27.535868  293188 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0412 20:27:28.176545  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:30.177022  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:32.677020  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:35.177243  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:37.677194  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:40.176627  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:42.177209  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:44.677318  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:46.677818  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:49.176630  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:51.676722  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:54.176912  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:56.177137  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:58.677009  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:28:01.177266  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:28:03.676844  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:28:06.176674  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:28:08.177076  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:28:10.177207  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:28:12.676641  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:28:15.176557  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:28:17.677002  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:28:19.677697  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:28:22.176483  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:28:24.676630  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:28:26.677667  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:28:29.177357  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:28:31.677367  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:28:34.176852  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:28:36.177402  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:28:38.677164  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:28:41.177066  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:28:43.676983  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:28:46.177366  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:28:48.677127  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:28:50.677295  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:28:53.177230  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:28:55.677228  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:28:58.176672  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:29:00.176822  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:29:02.676739  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:29:04.677056  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:29:06.677123  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:29:09.176984  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:29:11.677277  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:29:14.176562  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:29:16.176807  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:29:18.677182  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:29:21.177384  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:29:23.677402  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:29:26.176749  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:29:28.176804  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:29:30.177721  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:29:32.676621  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:29:34.677246  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:29:36.677802  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:29:39.176692  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:29:41.676441  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:29:43.676503  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:29:45.677234  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:29:48.177008  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:29:50.677510  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:29:53.177088  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:29:55.677043  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:29:58.176812  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:30:00.177215  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:30:02.676366  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:30:04.676503  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:30:06.676719  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:30:08.677078  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:30:11.176385  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:30:13.176787  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:30:13.673973  302775 pod_ready.go:81] duration metric: took 4m0.003097375s waiting for pod "coredns-64897985d-c2gzm" in "kube-system" namespace to be "Ready" ...
	E0412 20:30:13.674004  302775 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "coredns-64897985d-c2gzm" in "kube-system" namespace to be "Ready" (will not retry!)
	I0412 20:30:13.674026  302775 pod_ready.go:38] duration metric: took 4m0.008261536s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0412 20:30:13.674088  302775 kubeadm.go:605] restartCluster took 4m15.671526358s
	W0412 20:30:13.674261  302775 out.go:241] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
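
Aside: the ~2.5s-spaced pod_ready lines above are a single poll loop. Each iteration re-fetches the coredns pod, finds it still Pending (the only node carries the node.kubernetes.io/not-ready taint, so the pod is Unschedulable), and retries until the 4m0s budget runs out, after which minikube gives up on restartCluster and falls back to a full kubeadm reset below. A minimal client-go sketch of such a readiness wait follows; the kubeconfig path is a placeholder and the 2.5s interval is inferred from the timestamps, so treat it as an illustration of the idea, not minikube's actual pod_ready.go.

// podready_sketch.go — illustrative only, not minikube source.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls until the pod reports condition Ready=True or the timeout expires.
func waitPodReady(c kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2500*time.Millisecond, timeout, func() (bool, error) {
		pod, err := c.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // transient API errors: keep polling
		}
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil // a Pending pod may not carry a Ready condition at all
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	c := kubernetes.NewForConfigOrDie(cfg)
	err = waitPodReady(c, "kube-system", "coredns-64897985d-c2gzm", 4*time.Minute)
	fmt.Println("wait result:", err) // nil once Ready; a timeout error in the scenario logged above
}
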
	I0412 20:30:13.674296  302775 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0412 20:30:15.434543  302775 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.760223538s)
	I0412 20:30:15.434648  302775 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0412 20:30:15.444487  302775 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0412 20:30:15.452033  302775 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0412 20:30:15.452119  302775 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0412 20:30:15.459066  302775 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0412 20:30:15.459111  302775 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0412 20:30:28.943093  302775 out.go:203]   - Generating certificates and keys ...
	I0412 20:30:28.946723  302775 out.go:203]   - Booting up control plane ...
	I0412 20:30:28.949531  302775 out.go:203]   - Configuring RBAC rules ...
	I0412 20:30:28.951251  302775 cni.go:93] Creating CNI manager for ""
	I0412 20:30:28.951270  302775 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0412 20:30:28.954437  302775 out.go:176] * Configuring CNI (Container Networking Interface) ...
	I0412 20:30:28.954502  302775 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0412 20:30:28.958449  302775 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.5/kubectl ...
	I0412 20:30:28.958473  302775 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0412 20:30:28.972610  302775 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0412 20:30:29.581068  302775 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0412 20:30:29.581147  302775 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl label nodes minikube.k8s.io/version=v1.25.2 minikube.k8s.io/commit=dcd548d63d1c0dcbdc0ffc0bd37d4379117c142f minikube.k8s.io/name=default-k8s-different-port-20220412201228-42006 minikube.k8s.io/updated_at=2022_04_12T20_30_29_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:30:29.581148  302775 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:30:29.588127  302775 ops.go:34] apiserver oom_adj: -16
	I0412 20:30:29.648666  302775 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:30:30.229416  302775 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:30:30.729281  302775 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:30:31.229706  302775 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:30:31.729052  302775 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:30:32.228891  302775 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:30:32.729287  302775 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:30:33.228878  302775 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:30:33.729605  302775 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:30:34.229274  302775 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:30:34.729516  302775 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:30:35.229278  302775 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:30:35.729029  302775 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:30:36.228984  302775 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:30:36.729282  302775 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:30:37.229296  302775 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:30:37.729119  302775 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:30:38.229274  302775 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:30:38.729302  302775 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:30:39.229163  302775 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:30:39.728992  302775 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:30:40.229522  302775 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:30:40.729277  302775 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:30:41.228750  302775 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:30:41.729285  302775 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:30:42.228910  302775 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:30:42.729297  302775 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:30:42.795666  302775 kubeadm.go:1020] duration metric: took 13.214575797s to wait for elevateKubeSystemPrivileges.
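
Aside: the half-second-spaced `kubectl get sa default` runs above are another poll loop, this one waiting after kubeadm init for the "default" ServiceAccount to exist — a cheap signal that kube-controller-manager is up and minting service accounts; here it succeeded after 13.2s. Reusing the imports from the earlier sketch, an equivalent client-go check might look like this (again an illustrative stand-in for the shelled-out kubectl loop minikube actually runs):

// waitDefaultSA polls until the "default" ServiceAccount exists in the namespace.
func waitDefaultSA(c kubernetes.Interface, ns string, timeout time.Duration) error {
	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
		_, err := c.CoreV1().ServiceAccounts(ns).Get(context.TODO(), "default", metav1.GetOptions{})
		return err == nil, nil // NotFound (or any transient error) just means: poll again
	})
}
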
	I0412 20:30:42.795702  302775 kubeadm.go:393] StartCluster complete in 4m44.840593181s
	I0412 20:30:42.795726  302775 settings.go:142] acquiring lock: {Name:mkaf0259d09993f7f0249c32b54fea561e21f88c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 20:30:42.795894  302775 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0412 20:30:42.797959  302775 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig: {Name:mk47182dadcc139652898f38b199a7292a3b4031 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 20:30:43.316096  302775 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20220412201228-42006" rescaled to 1
	I0412 20:30:43.316236  302775 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0412 20:30:43.316267  302775 addons.go:415] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I0412 20:30:43.316330  302775 addons.go:65] Setting storage-provisioner=true in profile "default-k8s-different-port-20220412201228-42006"
	I0412 20:30:43.316365  302775 addons.go:65] Setting default-storageclass=true in profile "default-k8s-different-port-20220412201228-42006"
	I0412 20:30:43.316387  302775 addons.go:65] Setting metrics-server=true in profile "default-k8s-different-port-20220412201228-42006"
	I0412 20:30:43.316392  302775 addons.go:153] Setting addon storage-provisioner=true in "default-k8s-different-port-20220412201228-42006"
	I0412 20:30:43.316399  302775 addons.go:153] Setting addon metrics-server=true in "default-k8s-different-port-20220412201228-42006"
	I0412 20:30:43.316231  302775 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.23.5 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0412 20:30:43.318925  302775 out.go:176] * Verifying Kubernetes components...
	I0412 20:30:43.316370  302775 addons.go:65] Setting dashboard=true in profile "default-k8s-different-port-20220412201228-42006"
	I0412 20:30:43.319000  302775 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0412 20:30:43.319019  302775 addons.go:153] Setting addon dashboard=true in "default-k8s-different-port-20220412201228-42006"
	I0412 20:30:43.316478  302775 config.go:178] Loaded profile config "default-k8s-different-port-20220412201228-42006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.5
	I0412 20:30:43.316392  302775 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20220412201228-42006"
	W0412 20:30:43.316403  302775 addons.go:165] addon storage-provisioner should already be in state true
	I0412 20:30:43.319204  302775 host.go:66] Checking if "default-k8s-different-port-20220412201228-42006" exists ...
	W0412 20:30:43.316409  302775 addons.go:165] addon metrics-server should already be in state true
	I0412 20:30:43.319309  302775 host.go:66] Checking if "default-k8s-different-port-20220412201228-42006" exists ...
	W0412 20:30:43.319076  302775 addons.go:165] addon dashboard should already be in state true
	I0412 20:30:43.319411  302775 host.go:66] Checking if "default-k8s-different-port-20220412201228-42006" exists ...
	I0412 20:30:43.319521  302775 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220412201228-42006 --format={{.State.Status}}
	I0412 20:30:43.319712  302775 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220412201228-42006 --format={{.State.Status}}
	I0412 20:30:43.319812  302775 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220412201228-42006 --format={{.State.Status}}
	I0412 20:30:43.319884  302775 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220412201228-42006 --format={{.State.Status}}
	I0412 20:30:43.368004  302775 out.go:176]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0412 20:30:43.369733  302775 out.go:176]   - Using image kubernetesui/dashboard:v2.5.1
	I0412 20:30:43.368143  302775 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0412 20:30:43.369830  302775 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0412 20:30:43.371713  302775 out.go:176]   - Using image k8s.gcr.io/echoserver:1.4
	I0412 20:30:43.369909  302775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220412201228-42006
	I0412 20:30:43.371811  302775 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0412 20:30:43.371829  302775 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0412 20:30:43.371894  302775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220412201228-42006
	I0412 20:30:43.373558  302775 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0412 20:30:43.373752  302775 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0412 20:30:43.373772  302775 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0412 20:30:43.373846  302775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220412201228-42006
	I0412 20:30:43.384370  302775 addons.go:153] Setting addon default-storageclass=true in "default-k8s-different-port-20220412201228-42006"
	W0412 20:30:43.384406  302775 addons.go:165] addon default-storageclass should already be in state true
	I0412 20:30:43.384440  302775 host.go:66] Checking if "default-k8s-different-port-20220412201228-42006" exists ...
	I0412 20:30:43.384946  302775 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220412201228-42006 --format={{.State.Status}}
	I0412 20:30:43.415524  302775 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20220412201228-42006" to be "Ready" ...
	I0412 20:30:43.415635  302775 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0412 20:30:43.419849  302775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220412201228-42006/id_rsa Username:docker}
	I0412 20:30:43.421835  302775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220412201228-42006/id_rsa Username:docker}
	I0412 20:30:43.422931  302775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220412201228-42006/id_rsa Username:docker}
	I0412 20:30:43.441543  302775 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0412 20:30:43.441567  302775 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0412 20:30:43.441611  302775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220412201228-42006
	I0412 20:30:43.477201  302775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220412201228-42006/id_rsa Username:docker}
	I0412 20:30:43.584023  302775 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0412 20:30:43.594296  302775 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0412 20:30:43.594323  302775 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0412 20:30:43.594540  302775 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0412 20:30:43.594567  302775 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0412 20:30:43.597433  302775 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0412 20:30:43.611081  302775 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0412 20:30:43.611109  302775 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0412 20:30:43.612709  302775 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0412 20:30:43.612735  302775 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0412 20:30:43.695590  302775 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0412 20:30:43.695620  302775 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0412 20:30:43.695871  302775 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0412 20:30:43.695896  302775 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0412 20:30:43.713161  302775 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0412 20:30:43.783491  302775 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0412 20:30:43.783522  302775 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0412 20:30:43.786723  302775 start.go:777] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
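
Aside: the long bash -c pipeline at 20:30:43.415 is what produced this "host record injected" line. It dumps the coredns ConfigMap, uses sed to splice a hosts{} stanza (mapping 192.168.49.1 to host.minikube.internal, with fallthrough) in front of the Corefile's "forward . /etc/resolv.conf" plugin, and pipes the result back through kubectl replace. A rough client-go equivalent of the same edit (the string anchor and namespace are copied from the sed expression above; add "strings" to the earlier imports; a sketch, not minikube's implementation):

// injectHostRecord splices a hosts{} stanza into the CoreDNS Corefile so that
// host.minikube.internal resolves to the docker network gateway IP.
func injectHostRecord(c kubernetes.Interface, gatewayIP string) error {
	cm, err := c.CoreV1().ConfigMaps("kube-system").Get(context.TODO(), "coredns", metav1.GetOptions{})
	if err != nil {
		return err
	}
	stanza := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", gatewayIP)
	anchor := "        forward . /etc/resolv.conf" // indentation as in the sed regex above
	cm.Data["Corefile"] = strings.Replace(cm.Data["Corefile"], anchor, stanza+anchor, 1)
	_, err = c.CoreV1().ConfigMaps("kube-system").Update(context.TODO(), cm, metav1.UpdateOptions{})
	return err
}
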
	I0412 20:30:43.804035  302775 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0412 20:30:43.804161  302775 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0412 20:30:43.880364  302775 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0412 20:30:43.880416  302775 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0412 20:30:43.898688  302775 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0412 20:30:43.898715  302775 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0412 20:30:43.979407  302775 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0412 20:30:43.979444  302775 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0412 20:30:44.000255  302775 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0412 20:30:44.000283  302775 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0412 20:30:44.102994  302775 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0412 20:30:44.494063  302775 addons.go:386] Verifying addon metrics-server=true in "default-k8s-different-port-20220412201228-42006"
	I0412 20:30:44.918251  302775 out.go:176] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0412 20:30:44.918280  302775 addons.go:417] enableAddons completed in 1.602020138s
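
Aside: every addon above follows the same two-step pattern — scp the manifest from memory to /etc/kubernetes/addons/ on the node, then invoke the version-pinned kubectl at /var/lib/minikube/binaries/v1.23.5/kubectl with the machine-local kubeconfig, passing all of an addon's manifests to a single apply. Sketched with os/exec below (add "os/exec" to the earlier imports; paths are copied from the log, the ssh hop minikube actually performs is elided, and --kubeconfig stands in for the KUBECONFIG environment variable used above):

// applyAddons runs one `kubectl apply` over a set of already-copied manifests.
func applyAddons(manifests ...string) error {
	args := []string{"--kubeconfig=/var/lib/minikube/kubeconfig", "apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	out, err := exec.Command("/var/lib/minikube/binaries/v1.23.5/kubectl", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl apply: %v\n%s", err, out)
	}
	return nil
}
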
	I0412 20:30:45.423200  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:30:47.923285  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:30:50.422835  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:30:52.923459  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:30:55.422462  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:30:57.923268  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:31:00.422559  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:31:02.422789  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:31:04.422907  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:31:06.923381  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:31:09.422313  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:31:11.922559  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:31:13.922722  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:31:16.423078  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:31:18.423314  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:31:20.923142  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:31:22.923173  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:31:24.923329  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:31:27.423082  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:31:29.922381  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:31:31.922796  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:31:33.923653  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:31:36.422332  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:31:38.423001  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:31:40.922454  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:31:42.923084  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:31:45.423255  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:31:47.922302  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:31:49.924482  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:31:52.422465  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:31:54.922902  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:31:56.923448  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:31:59.422807  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:32:01.422968  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:32:03.923510  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:32:06.422160  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:32:08.423365  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:32:10.922571  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:32:12.922895  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:32:14.923501  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:32:17.423175  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:32:19.922939  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:32:22.421806  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:32:24.422759  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:32:26.423058  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:32:28.922712  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:32:30.922856  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:32:33.422864  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:32:35.923228  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:32:38.423092  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:32:40.922749  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:32:42.923323  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:32:45.422441  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:32:47.423052  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:32:49.922914  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:32:51.923513  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:32:54.422949  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:32:56.423035  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:32:58.923416  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:33:01.422712  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:33:03.422921  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:33:05.923038  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:33:08.422910  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:33:10.923412  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:33:13.423048  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:33:15.922494  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:33:17.923130  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:33:19.923551  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:33:22.422029  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:33:24.422643  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:33:26.423175  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:33:28.923212  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:33:31.422303  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:33:33.423218  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:33:35.923095  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:33:38.422465  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:33:40.423119  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:33:42.924176  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:33:45.422942  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:33:47.923152  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:33:50.422822  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:33:52.923237  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:33:55.423255  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:33:57.923053  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:33:59.923203  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:34:01.923370  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:34:04.422633  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:34:06.922559  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:34:09.422887  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:34:11.423344  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:34:13.922945  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:34:16.423257  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:34:18.922588  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:34:20.923031  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:34:23.423271  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:34:25.423373  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:34:27.922498  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:34:29.922791  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:34:31.922929  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:34:34.423381  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:34:36.923060  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:34:38.923113  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:34:41.422479  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:34:43.422840  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:34:43.425257  302775 node_ready.go:38] duration metric: took 4m0.009696502s waiting for node "default-k8s-different-port-20220412201228-42006" to be "Ready" ...
	I0412 20:34:43.428510  302775 out.go:176] 
	W0412 20:34:43.428724  302775 out.go:241] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0412 20:34:43.428749  302775 out.go:241] * 
	W0412 20:34:43.429581  302775 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	8185fef02cc15       6de166512aa22       53 seconds ago      Running             kindnet-cni               4                   cfff760ba8d17
	dc62651792ada       6de166512aa22       4 minutes ago       Exited              kindnet-cni               3                   cfff760ba8d17
	35cfeab0e4e1d       c21b0c7400f98       13 minutes ago      Running             kube-proxy                0                   44b87fce4f1d0
	899651f5f598c       06a629a7e51cd       13 minutes ago      Running             kube-controller-manager   0                   82e6dfa275719
	43048450227de       b305571ca60a5       13 minutes ago      Running             kube-apiserver            0                   e8c2453c42536
	c74bd61d489ea       b2756210eeabf       13 minutes ago      Running             etcd                      0                   01918d7054f01
	eace48121b7e9       301ddc62b80b1       13 minutes ago      Running             kube-scheduler            0                   8f273a6589233
	
	* 
	* ==> containerd <==
	* -- Logs begin at Tue 2022-04-12 20:17:30 UTC, end at Tue 2022-04-12 20:36:26 UTC. --
	Apr 12 20:28:44 old-k8s-version-20220412200421-42006 containerd[345]: time="2022-04-12T20:28:44.355752958Z" level=info msg="RemoveContainer for \"d71fcd57fee8b6777852fae0b1b5597d1815543e0967b4f93d5602bab62ff3c0\" returns successfully"
	Apr 12 20:28:54 old-k8s-version-20220412200421-42006 containerd[345]: time="2022-04-12T20:28:54.889845568Z" level=info msg="CreateContainer within sandbox \"cfff760ba8d171278faf2170efc42a44df63f593fc4c709edf1a213ee0634308\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:2,}"
	Apr 12 20:28:54 old-k8s-version-20220412200421-42006 containerd[345]: time="2022-04-12T20:28:54.905799245Z" level=info msg="CreateContainer within sandbox \"cfff760ba8d171278faf2170efc42a44df63f593fc4c709edf1a213ee0634308\" for &ContainerMetadata{Name:kindnet-cni,Attempt:2,} returns container id \"f111a11db8a640eb4037c37e248ac22012718edc2794055e2927ac5cccb55b27\""
	Apr 12 20:28:54 old-k8s-version-20220412200421-42006 containerd[345]: time="2022-04-12T20:28:54.906382543Z" level=info msg="StartContainer for \"f111a11db8a640eb4037c37e248ac22012718edc2794055e2927ac5cccb55b27\""
	Apr 12 20:28:55 old-k8s-version-20220412200421-42006 containerd[345]: time="2022-04-12T20:28:55.084275728Z" level=info msg="StartContainer for \"f111a11db8a640eb4037c37e248ac22012718edc2794055e2927ac5cccb55b27\" returns successfully"
	Apr 12 20:31:35 old-k8s-version-20220412200421-42006 containerd[345]: time="2022-04-12T20:31:35.325080018Z" level=info msg="shim disconnected" id=f111a11db8a640eb4037c37e248ac22012718edc2794055e2927ac5cccb55b27
	Apr 12 20:31:35 old-k8s-version-20220412200421-42006 containerd[345]: time="2022-04-12T20:31:35.325161673Z" level=warning msg="cleaning up after shim disconnected" id=f111a11db8a640eb4037c37e248ac22012718edc2794055e2927ac5cccb55b27 namespace=k8s.io
	Apr 12 20:31:35 old-k8s-version-20220412200421-42006 containerd[345]: time="2022-04-12T20:31:35.325178472Z" level=info msg="cleaning up dead shim"
	Apr 12 20:31:35 old-k8s-version-20220412200421-42006 containerd[345]: time="2022-04-12T20:31:35.336179477Z" level=warning msg="cleanup warnings time=\"2022-04-12T20:31:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5757\n"
	Apr 12 20:31:35 old-k8s-version-20220412200421-42006 containerd[345]: time="2022-04-12T20:31:35.573137077Z" level=info msg="RemoveContainer for \"72d9664fbba36142efc1f361b4633b51fbdca60ad76718b907afdf20587df1a5\""
	Apr 12 20:31:35 old-k8s-version-20220412200421-42006 containerd[345]: time="2022-04-12T20:31:35.578956414Z" level=info msg="RemoveContainer for \"72d9664fbba36142efc1f361b4633b51fbdca60ad76718b907afdf20587df1a5\" returns successfully"
	Apr 12 20:32:01 old-k8s-version-20220412200421-42006 containerd[345]: time="2022-04-12T20:32:01.889588544Z" level=info msg="CreateContainer within sandbox \"cfff760ba8d171278faf2170efc42a44df63f593fc4c709edf1a213ee0634308\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:3,}"
	Apr 12 20:32:01 old-k8s-version-20220412200421-42006 containerd[345]: time="2022-04-12T20:32:01.903844142Z" level=info msg="CreateContainer within sandbox \"cfff760ba8d171278faf2170efc42a44df63f593fc4c709edf1a213ee0634308\" for &ContainerMetadata{Name:kindnet-cni,Attempt:3,} returns container id \"dc62651792ada70694213903668b5df4125fb342320e2592a636c32131a7ac28\""
	Apr 12 20:32:01 old-k8s-version-20220412200421-42006 containerd[345]: time="2022-04-12T20:32:01.904431531Z" level=info msg="StartContainer for \"dc62651792ada70694213903668b5df4125fb342320e2592a636c32131a7ac28\""
	Apr 12 20:32:02 old-k8s-version-20220412200421-42006 containerd[345]: time="2022-04-12T20:32:02.084098835Z" level=info msg="StartContainer for \"dc62651792ada70694213903668b5df4125fb342320e2592a636c32131a7ac28\" returns successfully"
	Apr 12 20:34:42 old-k8s-version-20220412200421-42006 containerd[345]: time="2022-04-12T20:34:42.321465348Z" level=info msg="shim disconnected" id=dc62651792ada70694213903668b5df4125fb342320e2592a636c32131a7ac28
	Apr 12 20:34:42 old-k8s-version-20220412200421-42006 containerd[345]: time="2022-04-12T20:34:42.321531296Z" level=warning msg="cleaning up after shim disconnected" id=dc62651792ada70694213903668b5df4125fb342320e2592a636c32131a7ac28 namespace=k8s.io
	Apr 12 20:34:42 old-k8s-version-20220412200421-42006 containerd[345]: time="2022-04-12T20:34:42.321548302Z" level=info msg="cleaning up dead shim"
	Apr 12 20:34:42 old-k8s-version-20220412200421-42006 containerd[345]: time="2022-04-12T20:34:42.332899364Z" level=warning msg="cleanup warnings time=\"2022-04-12T20:34:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6222\n"
	Apr 12 20:34:42 old-k8s-version-20220412200421-42006 containerd[345]: time="2022-04-12T20:34:42.815291544Z" level=info msg="RemoveContainer for \"f111a11db8a640eb4037c37e248ac22012718edc2794055e2927ac5cccb55b27\""
	Apr 12 20:34:42 old-k8s-version-20220412200421-42006 containerd[345]: time="2022-04-12T20:34:42.822239879Z" level=info msg="RemoveContainer for \"f111a11db8a640eb4037c37e248ac22012718edc2794055e2927ac5cccb55b27\" returns successfully"
	Apr 12 20:35:32 old-k8s-version-20220412200421-42006 containerd[345]: time="2022-04-12T20:35:32.889829636Z" level=info msg="CreateContainer within sandbox \"cfff760ba8d171278faf2170efc42a44df63f593fc4c709edf1a213ee0634308\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:4,}"
	Apr 12 20:35:32 old-k8s-version-20220412200421-42006 containerd[345]: time="2022-04-12T20:35:32.905577372Z" level=info msg="CreateContainer within sandbox \"cfff760ba8d171278faf2170efc42a44df63f593fc4c709edf1a213ee0634308\" for &ContainerMetadata{Name:kindnet-cni,Attempt:4,} returns container id \"8185fef02cc15affeed85290cc2dab639be7caf813330b63ff6f9b64439faaa4\""
	Apr 12 20:35:32 old-k8s-version-20220412200421-42006 containerd[345]: time="2022-04-12T20:35:32.906143782Z" level=info msg="StartContainer for \"8185fef02cc15affeed85290cc2dab639be7caf813330b63ff6f9b64439faaa4\""
	Apr 12 20:35:33 old-k8s-version-20220412200421-42006 containerd[345]: time="2022-04-12T20:35:33.083967992Z" level=info msg="StartContainer for \"8185fef02cc15affeed85290cc2dab639be7caf813330b63ff6f9b64439faaa4\" returns successfully"
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-20220412200421-42006
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-20220412200421-42006
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=dcd548d63d1c0dcbdc0ffc0bd37d4379117c142f
	                    minikube.k8s.io/name=old-k8s-version-20220412200421-42006
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_04_12T20_23_07_0700
	                    minikube.k8s.io/version=v1.25.2
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 12 Apr 2022 20:23:02 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 12 Apr 2022 20:36:02 +0000   Tue, 12 Apr 2022 20:22:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 12 Apr 2022 20:36:02 +0000   Tue, 12 Apr 2022 20:22:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 12 Apr 2022 20:36:02 +0000   Tue, 12 Apr 2022 20:22:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Tue, 12 Apr 2022 20:36:02 +0000   Tue, 12 Apr 2022 20:22:59 +0000   KubeletNotReady              runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    old-k8s-version-20220412200421-42006
	Capacity:
	 cpu:                8
	 ephemeral-storage:  304695084Ki
	 hugepages-1Gi:      0
	 hugepages-2Mi:      0
	 memory:             32873828Ki
	 pods:               110
	Allocatable:
	 cpu:                8
	 ephemeral-storage:  304695084Ki
	 hugepages-1Gi:      0
	 hugepages-2Mi:      0
	 memory:             32873828Ki
	 pods:               110
	System Info:
	 Machine ID:                 140a143b31184b58be947b52a01fff83
	 System UUID:                0b57e9d3-0bbc-4976-a928-dc02ca892e39
	 Boot ID:                    16b2caa1-c1b9-4ccc-85b8-d4dc3f51a5e1
	 Kernel Version:             5.13.0-1023-gcp
	 OS Image:                   Ubuntu 20.04.4 LTS
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  containerd://1.5.10
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (6 in total)
	  Namespace                  Name                                                            CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                                            ------------  ----------  ---------------  -------------  ---
	  kube-system                etcd-old-k8s-version-20220412200421-42006                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                kindnet-r6mfw                                                   100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                kube-apiserver-old-k8s-version-20220412200421-42006             250m (3%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                kube-controller-manager-old-k8s-version-20220412200421-42006    200m (2%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                kube-proxy-ch8rr                                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                kube-scheduler-old-k8s-version-20220412200421-42006             100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                650m (8%)   100m (1%)
	  memory             50Mi (0%)   50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From                                              Message
	  ----    ------                   ----               ----                                              -------
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet, old-k8s-version-20220412200421-42006     Node old-k8s-version-20220412200421-42006 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet, old-k8s-version-20220412200421-42006     Node old-k8s-version-20220412200421-42006 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet, old-k8s-version-20220412200421-42006     Node old-k8s-version-20220412200421-42006 status is now: NodeHasSufficientPID
	  Normal  Starting                 13m                kube-proxy, old-k8s-version-20220412200421-42006  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [  +0.000009] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +0.125166] IPv4: martian source 10.244.0.7 from 10.244.0.7, on dev vethe3e22a2f
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff d2 83 e6 b4 2e c9 08 06
	[  +0.519855] IPv4: martian source 10.244.0.8 from 10.244.0.8, on dev vethde433a44
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fe f7 53 8a eb 26 08 06
	[  +0.208112] IPv4: martian source 10.244.0.9 from 10.244.0.9, on dev veth05fda112
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 06 c9 f0 64 c1 d9 08 06
	[Apr12 20:12] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +1.026706] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +1.023926] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +2.947865] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +1.023840] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +1.019933] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +2.959880] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +1.007861] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +1.023916] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	
	* 
	* ==> etcd [c74bd61d489ea294c4524038815c535a5b71892b9f14baf6fda9b9aa6beb3722] <==
	* 2022-04-12 20:22:58.622312 I | raft: 8688e899f7831fc7 became follower at term 0
	2022-04-12 20:22:58.622318 I | raft: newRaft 8688e899f7831fc7 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	2022-04-12 20:22:58.622321 I | raft: 8688e899f7831fc7 became follower at term 1
	2022-04-12 20:22:58.681535 W | auth: simple token is not cryptographically signed
	2022-04-12 20:22:58.685815 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2022-04-12 20:22:58.686083 I | etcdserver: 8688e899f7831fc7 as single-node; fast-forwarding 9 ticks (election ticks 10)
	2022-04-12 20:22:58.686636 I | etcdserver/membership: added member 8688e899f7831fc7 [https://192.168.67.2:2380] to cluster 9d8fdeb88b6def78
	2022-04-12 20:22:58.688641 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2022-04-12 20:22:58.688786 I | embed: listening for metrics on http://192.168.67.2:2381
	2022-04-12 20:22:58.689067 I | embed: listening for metrics on http://127.0.0.1:2381
	2022-04-12 20:22:59.322750 I | raft: 8688e899f7831fc7 is starting a new election at term 1
	2022-04-12 20:22:59.322790 I | raft: 8688e899f7831fc7 became candidate at term 2
	2022-04-12 20:22:59.322807 I | raft: 8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 2
	2022-04-12 20:22:59.322821 I | raft: 8688e899f7831fc7 became leader at term 2
	2022-04-12 20:22:59.322828 I | raft: raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 2
	2022-04-12 20:22:59.323129 I | etcdserver: setting up the initial cluster version to 3.3
	2022-04-12 20:22:59.324060 N | etcdserver/membership: set the initial cluster version to 3.3
	2022-04-12 20:22:59.324130 I | etcdserver: published {Name:old-k8s-version-20220412200421-42006 ClientURLs:[https://192.168.67.2:2379]} to cluster 9d8fdeb88b6def78
	2022-04-12 20:22:59.324144 I | etcdserver/api: enabled capabilities for version 3.3
	2022-04-12 20:22:59.324159 I | embed: ready to serve client requests
	2022-04-12 20:22:59.324240 I | embed: ready to serve client requests
	2022-04-12 20:22:59.327070 I | embed: serving client requests on 127.0.0.1:2379
	2022-04-12 20:22:59.329081 I | embed: serving client requests on 192.168.67.2:2379
	2022-04-12 20:32:59.341297 I | mvcc: store.index: compact 564
	2022-04-12 20:32:59.342184 I | mvcc: finished scheduled compaction at 564 (took 539.343µs)
	
	* 
	* ==> kernel <==
	*  20:36:26 up  3:18,  0 users,  load average: 0.29, 0.46, 0.82
	Linux old-k8s-version-20220412200421-42006 5.13.0-1023-gcp #28~20.04.1-Ubuntu SMP Wed Mar 30 03:51:07 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [43048450227de67e9e1809cef2a38841367c12dd11d318da59981f0b718e3d27] <==
	* I0412 20:29:03.392710       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0412 20:29:03.392780       1 handler_proxy.go:99] no RequestInfo found in the context
	E0412 20:29:03.392811       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0412 20:29:03.392820       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0412 20:31:03.393135       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0412 20:31:03.393234       1 handler_proxy.go:99] no RequestInfo found in the context
	E0412 20:31:03.393321       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0412 20:31:03.393339       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0412 20:33:03.394856       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0412 20:33:03.394954       1 handler_proxy.go:99] no RequestInfo found in the context
	E0412 20:33:03.395022       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0412 20:33:03.395037       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0412 20:34:03.395294       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0412 20:34:03.395423       1 handler_proxy.go:99] no RequestInfo found in the context
	E0412 20:34:03.395493       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0412 20:34:03.395511       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0412 20:36:03.395767       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0412 20:36:03.395876       1 handler_proxy.go:99] no RequestInfo found in the context
	E0412 20:36:03.395964       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0412 20:36:03.395983       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [899651f5f598cfc5b9f581e04ee299c2209d93af0488aba1e94a3bc26897c31c] <==
	* E0412 20:29:55.483761       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0412 20:30:18.337367       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0412 20:30:25.735732       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0412 20:30:50.339085       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0412 20:30:55.987390       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0412 20:31:22.340940       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0412 20:31:26.239052       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0412 20:31:54.342721       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0412 20:31:56.490669       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0412 20:32:26.344434       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0412 20:32:26.742291       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E0412 20:32:56.993973       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0412 20:32:58.346202       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0412 20:33:27.247109       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0412 20:33:30.348104       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0412 20:33:57.498614       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0412 20:34:02.349718       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0412 20:34:27.750294       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0412 20:34:34.351488       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0412 20:34:58.002029       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0412 20:35:06.353191       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0412 20:35:28.253562       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0412 20:35:38.354961       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0412 20:35:58.505197       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0412 20:36:10.356815       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [35cfeab0e4e1d9964e124ff25be39891b8083f742e581e3929c3b8722b2f97fa] <==
	* W0412 20:23:22.585721       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I0412 20:23:22.594854       1 node.go:135] Successfully retrieved node IP: 192.168.67.2
	I0412 20:23:22.594901       1 server_others.go:149] Using iptables Proxier.
	I0412 20:23:22.595265       1 server.go:529] Version: v1.16.0
	I0412 20:23:22.595938       1 config.go:313] Starting service config controller
	I0412 20:23:22.595977       1 shared_informer.go:197] Waiting for caches to sync for service config
	I0412 20:23:22.596105       1 config.go:131] Starting endpoints config controller
	I0412 20:23:22.596136       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I0412 20:23:22.696164       1 shared_informer.go:204] Caches are synced for service config 
	I0412 20:23:22.696339       1 shared_informer.go:204] Caches are synced for endpoints config 
	
	* 
	* ==> kube-scheduler [eace48121b7e9cb97a533cf95c603ce5868bf94d79d9ae87d2256ed29a48a90e] <==
	* E0412 20:23:02.581496       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0412 20:23:02.581570       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0412 20:23:02.581962       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0412 20:23:02.582000       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0412 20:23:02.583995       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0412 20:23:02.584131       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0412 20:23:02.584876       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0412 20:23:02.584965       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0412 20:23:02.585029       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0412 20:23:02.585381       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0412 20:23:02.587877       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0412 20:23:03.583153       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0412 20:23:03.584523       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0412 20:23:03.585485       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0412 20:23:03.586519       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0412 20:23:03.587580       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0412 20:23:03.589455       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0412 20:23:03.590444       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0412 20:23:03.591706       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0412 20:23:03.592824       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0412 20:23:03.594549       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0412 20:23:03.595674       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0412 20:23:21.989917       1 factory.go:585] pod is already present in the activeQ
	E0412 20:23:24.891555       1 factory.go:585] pod is already present in the activeQ
	E0412 20:23:25.227638       1 factory.go:585] pod is already present in the activeQ
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2022-04-12 20:17:30 UTC, end at Tue 2022-04-12 20:36:27 UTC. --
	Apr 12 20:34:42 old-k8s-version-20220412200421-42006 kubelet[2956]: E0412 20:34:42.815382    2956 pod_workers.go:191] Error syncing pod eea12494-5d62-4fc1-a11b-fc3c48b53e19 ("kindnet-r6mfw_kube-system(eea12494-5d62-4fc1-a11b-fc3c48b53e19)"), skipping: failed to "StartContainer" for "kindnet-cni" with CrashLoopBackOff: "back-off 40s restarting failed container=kindnet-cni pod=kindnet-r6mfw_kube-system(eea12494-5d62-4fc1-a11b-fc3c48b53e19)"
	Apr 12 20:34:43 old-k8s-version-20220412200421-42006 kubelet[2956]: E0412 20:34:43.101891    2956 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Apr 12 20:34:48 old-k8s-version-20220412200421-42006 kubelet[2956]: E0412 20:34:48.102698    2956 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Apr 12 20:34:52 old-k8s-version-20220412200421-42006 kubelet[2956]: E0412 20:34:52.887507    2956 pod_workers.go:191] Error syncing pod eea12494-5d62-4fc1-a11b-fc3c48b53e19 ("kindnet-r6mfw_kube-system(eea12494-5d62-4fc1-a11b-fc3c48b53e19)"), skipping: failed to "StartContainer" for "kindnet-cni" with CrashLoopBackOff: "back-off 40s restarting failed container=kindnet-cni pod=kindnet-r6mfw_kube-system(eea12494-5d62-4fc1-a11b-fc3c48b53e19)"
	Apr 12 20:34:53 old-k8s-version-20220412200421-42006 kubelet[2956]: E0412 20:34:53.103562    2956 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Apr 12 20:34:58 old-k8s-version-20220412200421-42006 kubelet[2956]: E0412 20:34:58.104456    2956 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Apr 12 20:35:03 old-k8s-version-20220412200421-42006 kubelet[2956]: E0412 20:35:03.105291    2956 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Apr 12 20:35:05 old-k8s-version-20220412200421-42006 kubelet[2956]: E0412 20:35:05.887737    2956 pod_workers.go:191] Error syncing pod eea12494-5d62-4fc1-a11b-fc3c48b53e19 ("kindnet-r6mfw_kube-system(eea12494-5d62-4fc1-a11b-fc3c48b53e19)"), skipping: failed to "StartContainer" for "kindnet-cni" with CrashLoopBackOff: "back-off 40s restarting failed container=kindnet-cni pod=kindnet-r6mfw_kube-system(eea12494-5d62-4fc1-a11b-fc3c48b53e19)"
	Apr 12 20:35:08 old-k8s-version-20220412200421-42006 kubelet[2956]: E0412 20:35:08.106055    2956 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Apr 12 20:35:13 old-k8s-version-20220412200421-42006 kubelet[2956]: E0412 20:35:13.106864    2956 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Apr 12 20:35:18 old-k8s-version-20220412200421-42006 kubelet[2956]: E0412 20:35:18.107713    2956 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Apr 12 20:35:18 old-k8s-version-20220412200421-42006 kubelet[2956]: E0412 20:35:18.887427    2956 pod_workers.go:191] Error syncing pod eea12494-5d62-4fc1-a11b-fc3c48b53e19 ("kindnet-r6mfw_kube-system(eea12494-5d62-4fc1-a11b-fc3c48b53e19)"), skipping: failed to "StartContainer" for "kindnet-cni" with CrashLoopBackOff: "back-off 40s restarting failed container=kindnet-cni pod=kindnet-r6mfw_kube-system(eea12494-5d62-4fc1-a11b-fc3c48b53e19)"
	Apr 12 20:35:23 old-k8s-version-20220412200421-42006 kubelet[2956]: E0412 20:35:23.108625    2956 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Apr 12 20:35:28 old-k8s-version-20220412200421-42006 kubelet[2956]: E0412 20:35:28.109403    2956 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Apr 12 20:35:33 old-k8s-version-20220412200421-42006 kubelet[2956]: E0412 20:35:33.110167    2956 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Apr 12 20:35:38 old-k8s-version-20220412200421-42006 kubelet[2956]: E0412 20:35:38.110987    2956 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Apr 12 20:35:43 old-k8s-version-20220412200421-42006 kubelet[2956]: E0412 20:35:43.111823    2956 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Apr 12 20:35:48 old-k8s-version-20220412200421-42006 kubelet[2956]: E0412 20:35:48.112617    2956 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Apr 12 20:35:53 old-k8s-version-20220412200421-42006 kubelet[2956]: E0412 20:35:53.113419    2956 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Apr 12 20:35:58 old-k8s-version-20220412200421-42006 kubelet[2956]: E0412 20:35:58.114354    2956 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Apr 12 20:36:03 old-k8s-version-20220412200421-42006 kubelet[2956]: E0412 20:36:03.115211    2956 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Apr 12 20:36:08 old-k8s-version-20220412200421-42006 kubelet[2956]: E0412 20:36:08.115967    2956 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Apr 12 20:36:13 old-k8s-version-20220412200421-42006 kubelet[2956]: E0412 20:36:13.116825    2956 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Apr 12 20:36:18 old-k8s-version-20220412200421-42006 kubelet[2956]: E0412 20:36:18.117574    2956 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Apr 12 20:36:23 old-k8s-version-20220412200421-42006 kubelet[2956]: E0412 20:36:23.118335    2956 kubelet.go:2187] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20220412200421-42006 -n old-k8s-version-20220412200421-42006
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-20220412200421-42006 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: coredns-5644d7b6d9-jhvqs metrics-server-6f89b5864b-g7z8d storage-provisioner dashboard-metrics-scraper-6b84985989-g99k4 kubernetes-dashboard-6fb5469cf5-k6tnl
helpers_test.go:272: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context old-k8s-version-20220412200421-42006 describe pod coredns-5644d7b6d9-jhvqs metrics-server-6f89b5864b-g7z8d storage-provisioner dashboard-metrics-scraper-6b84985989-g99k4 kubernetes-dashboard-6fb5469cf5-k6tnl

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context old-k8s-version-20220412200421-42006 describe pod coredns-5644d7b6d9-jhvqs metrics-server-6f89b5864b-g7z8d storage-provisioner dashboard-metrics-scraper-6b84985989-g99k4 kubernetes-dashboard-6fb5469cf5-k6tnl: exit status 1 (71.407489ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-5644d7b6d9-jhvqs" not found
	Error from server (NotFound): pods "metrics-server-6f89b5864b-g7z8d" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6b84985989-g99k4" not found
	Error from server (NotFound): pods "kubernetes-dashboard-6fb5469cf5-k6tnl" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context old-k8s-version-20220412200421-42006 describe pod coredns-5644d7b6d9-jhvqs metrics-server-6f89b5864b-g7z8d storage-provisioner dashboard-metrics-scraper-6b84985989-g99k4 kubernetes-dashboard-6fb5469cf5-k6tnl: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (542.46s)
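Triage note for this failure: the describe-nodes output above shows old-k8s-version-20220412200421-42006 stuck NotReady ("cni plugin not initialized"), so the pods the post-mortem asked about were never created, which is why every describe returned NotFound. The poll that node_ready.go logs (visible in the interleaved default-k8s-different-port output earlier) amounts to re-reading the Node's Ready condition. Below is a minimal client-go sketch of that check, written as a hypothetical standalone helper rather than minikube's actual code, and assuming a kubeconfig at the default location:

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// nodeReady reports whether the named node's Ready condition is True,
	// i.e. the same condition the node_ready.go poll keeps re-reading.
	func nodeReady(cs kubernetes.Interface, name string) (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeReady {
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ready, err := nodeReady(cs, "old-k8s-version-20220412200421-42006")
		if err != nil {
			panic(err)
		}
		fmt.Println("Ready:", ready) // stays false while the CNI taint persists
	}

The describe-nodes capture above surfaces the same condition together with the node.kubernetes.io/not-ready taint that kept every workload Pending.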

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (542.57s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:258: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-8469778f77-4f5z8" [bf8c5c1b-b291-4b00-8b87-04387449a94d] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
E0412 20:27:58.177819   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/bridge-20220412195202-42006/client.crt: no such file or directory
E0412 20:28:02.669990   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/addons-20220412192056-42006/client.crt: no such file or directory
E0412 20:28:14.515173   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/functional-20220412192609-42006/client.crt: no such file or directory
E0412 20:28:31.519678   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/enable-default-cni-20220412195202-42006/client.crt: no such file or directory
E0412 20:29:21.424805   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/bridge-20220412195202-42006/client.crt: no such file or directory
E0412 20:30:31.558555   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/auto-20220412195201-42006/client.crt: no such file or directory
E0412 20:30:54.808107   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/ingress-addon-legacy-20220412192911-42006/client.crt: no such file or directory
E0412 20:30:58.260157   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/custom-weave-20220412195203-42006/client.crt: no such file or directory
E0412 20:31:07.734272   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220412200453-42006/client.crt: no such file or directory
E0412 20:32:10.366473   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/cilium-20220412195203-42006/client.crt: no such file or directory
E0412 20:32:30.779146   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220412200453-42006/client.crt: no such file or directory
E0412 20:32:58.178160   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/bridge-20220412195202-42006/client.crt: no such file or directory
E0412 20:33:02.669599   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/addons-20220412192056-42006/client.crt: no such file or directory
E0412 20:33:14.515334   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/functional-20220412192609-42006/client.crt: no such file or directory
E0412 20:33:31.519770   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/enable-default-cni-20220412195202-42006/client.crt: no such file or directory
E0412 20:33:34.603075   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/auto-20220412195201-42006/client.crt: no such file or directory
E0412 20:34:01.307712   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/custom-weave-20220412195203-42006/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded

                                                
                                                
E0412 20:35:54.807537   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/ingress-addon-legacy-20220412192911-42006/client.crt: no such file or directory

                                                
                                                
E0412 20:36:07.733945   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/no-preload-20220412200453-42006/client.crt: no such file or directory

                                                
                                                
start_stop_delete_test.go:258: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: timed out waiting for the condition ****
start_stop_delete_test.go:258: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20220412200510-42006 -n embed-certs-20220412200510-42006
start_stop_delete_test.go:258: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2022-04-12 20:36:30.08723819 +0000 UTC m=+4569.063536607
start_stop_delete_test.go:258: (dbg) Run:  kubectl --context embed-certs-20220412200510-42006 describe po kubernetes-dashboard-8469778f77-4f5z8 -n kubernetes-dashboard
start_stop_delete_test.go:258: (dbg) Non-zero exit: kubectl --context embed-certs-20220412200510-42006 describe po kubernetes-dashboard-8469778f77-4f5z8 -n kubernetes-dashboard: context deadline exceeded (2.249µs)
start_stop_delete_test.go:258: kubectl --context embed-certs-20220412200510-42006 describe po kubernetes-dashboard-8469778f77-4f5z8 -n kubernetes-dashboard: context deadline exceeded
start_stop_delete_test.go:258: (dbg) Run:  kubectl --context embed-certs-20220412200510-42006 logs kubernetes-dashboard-8469778f77-4f5z8 -n kubernetes-dashboard
start_stop_delete_test.go:258: (dbg) Non-zero exit: kubectl --context embed-certs-20220412200510-42006 logs kubernetes-dashboard-8469778f77-4f5z8 -n kubernetes-dashboard: context deadline exceeded (200ns)
start_stop_delete_test.go:258: kubectl --context embed-certs-20220412200510-42006 logs kubernetes-dashboard-8469778f77-4f5z8 -n kubernetes-dashboard: context deadline exceeded
start_stop_delete_test.go:259: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: timed out waiting for the condition
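Note on the repeated "context deadline exceeded" warnings above: the helper keeps listing pods matching "k8s-app=kubernetes-dashboard" until its 9m0s budget expires, after which every list call fails with the expired context. Below is a minimal client-go sketch of that polling pattern, assuming a standard kubeconfig; it is illustrative only and is not minikube's actual test helper.

// podwait.go: sketch of waiting for a labeled pod under an overall deadline.
// Hypothetical standalone program, not minikube's helpers_test.go code.
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Overall deadline, analogous to the 9m0s budget in the failing test.
	ctx, cancel := context.WithTimeout(context.Background(), 9*time.Minute)
	defer cancel()

	for {
		pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(ctx, metav1.ListOptions{
			LabelSelector: "k8s-app=kubernetes-dashboard",
		})
		if err != nil {
			// Once ctx expires, every List call returns "context deadline
			// exceeded" -- the warning repeated verbatim in the log above.
			fmt.Println("WARNING: pod list returned:", err)
			if ctx.Err() != nil {
				fmt.Println("timed out waiting for the condition")
				return
			}
		} else if len(pods.Items) > 0 {
			fmt.Println("pod found:", pods.Items[0].Name)
			return
		}
		time.Sleep(2 * time.Second)
	}
}

Once the context passes its deadline, List returns immediately with the same error on every retry, which is why the warning repeats unchanged until the final timeout is reported.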
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-20220412200510-42006
helpers_test.go:235: (dbg) docker inspect embed-certs-20220412200510-42006:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "340eb3625ebd62fb359cd33fcc6dcfaf998d12a5a7abf9d2b97ffe2759fd47b7",
	        "Created": "2022-04-12T20:05:23.305199436Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 293455,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-04-12T20:18:26.612502116Z",
	            "FinishedAt": "2022-04-12T20:18:25.329747162Z"
	        },
	        "Image": "sha256:44d43b69f3d5ba7f801dca891b535f23f9839671e82277938ec7dc42a22c50d6",
	        "ResolvConfPath": "/var/lib/docker/containers/340eb3625ebd62fb359cd33fcc6dcfaf998d12a5a7abf9d2b97ffe2759fd47b7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/340eb3625ebd62fb359cd33fcc6dcfaf998d12a5a7abf9d2b97ffe2759fd47b7/hostname",
	        "HostsPath": "/var/lib/docker/containers/340eb3625ebd62fb359cd33fcc6dcfaf998d12a5a7abf9d2b97ffe2759fd47b7/hosts",
	        "LogPath": "/var/lib/docker/containers/340eb3625ebd62fb359cd33fcc6dcfaf998d12a5a7abf9d2b97ffe2759fd47b7/340eb3625ebd62fb359cd33fcc6dcfaf998d12a5a7abf9d2b97ffe2759fd47b7-json.log",
	        "Name": "/embed-certs-20220412200510-42006",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-20220412200510-42006:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-20220412200510-42006",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/dadeb2eddd4e44191a9cbc0ea441c3b044c125e01ecdef76eaf6f1e678a0465d-init/diff:/var/lib/docker/overlay2/a46d95d024de4bf9705eb193a92586bdab1878cd991975232b71b00099a9dcbd/diff:/var/lib/docker/overlay2/ea82ee4a684697cc3575193cd81b57372b927c9bf8e744fce634f9abd0ce56f9/diff:/var/lib/docker/overlay2/78746ad8dd0d6497f442bd186c99cfd280a7ed0ff07c9d33d217c0f00c8c4565/diff:/var/lib/docker/overlay2/a402f380eceb56655ea5f1e6ca4a61a01ae014a5df04f1a7d02d8f57ff3e6c84/diff:/var/lib/docker/overlay2/b27a231791a4d14a662f9e6e34fdd213411e56cc17149199657aa480018b3c72/diff:/var/lib/docker/overlay2/0a44e7fc2c8d5589d496b9d0585d39e8e142f48342ff9669a35c370bd0298e42/diff:/var/lib/docker/overlay2/6ca98e52ca7d4cc60d14bd2db9969dd3356e0e0ce3acd5bfb5734e6e59f52c7e/diff:/var/lib/docker/overlay2/9957a7c00c30c9d801326093ddf20994a7ee1daaa54bc4dac5c2dd6d8711bd7e/diff:/var/lib/docker/overlay2/f7a1aafecf6ee716c484b5eecbbf236a53607c253fe283c289707fad85495a88/diff:/var/lib/docker/overlay2/fe8cd126522650fedfc827751e0b74da9a882ff48de51bc9dee6428ee3bc1122/diff:/var/lib/docker/overlay2/5b4cc7e4a78288063ad39231ca158608aa28e9dec6015d4e186e4c4d6888017f/diff:/var/lib/docker/overlay2/2a754ceb6abee0f92c99667fae50c7899233e94595630e9caffbf73cda1ff741/diff:/var/lib/docker/overlay2/9e69139d9b2bc63ab678378e004018ece394ec37e8289ba5eb30901dda160da5/diff:/var/lib/docker/overlay2/3db8e6413b3a1f309b81d2e1a79c3d239c4e4568b31a6f4bf92511f477f3a61d/diff:/var/lib/docker/overlay2/5ab54e45d09e2d6da4f4228ebae3075b5974e1d847526c1011fc7368392ef0d2/diff:/var/lib/docker/overlay2/6daf6a3cf916347bbbb70ace4aab29dd0f272dc9e39d6b0bf14940470857f1d5/diff:/var/lib/docker/overlay2/b85d29df9ed74e769c82a956eb46ca4eaf51018e94270fee2f58a6f2d82c354c/diff:/var/lib/docker/overlay2/0804b9c30e0dcc68e15139106e47bca1969b010d520652c87ff1476f5da9b799/diff:/var/lib/docker/overlay2/2ef50ba91c77826aae2efca8daf7194c2d56fd8e745476a35413585cdab580a6/diff:/var/lib/docker/overlay2/6f5a272367c30d47254dedc8a42e6b2791c406c3b74fd6a8242d568e4ec362e3/diff:/var/lib/docker/overlay2/e978bd5ca7463862ca1b51d0bf19f95d916464dc866f09f1ab4a5ae4c082c3a9/diff:/var/lib/docker/overlay2/0d60a5805e276ca3bff4824250eab1d2960e9d10d28282e07652204c07dc107f/diff:/var/lib/docker/overlay2/d00efa0bc999057fcf3efdeed81022cc8b9b9871919f11d7d9199a3d22fda41b/diff:/var/lib/docker/overlay2/44d3db5bf7925c4cc8ee60008ff23d799e12ea6586850d797b930fa796788861/diff:/var/lib/docker/overlay2/4af15c525b7ce96b7fd4117c156f53cf9099702641c2907909c12b7019563d44/diff:/var/lib/docker/overlay2/ae9ca4b8da4afb1303158a42ec2ac83dc057c0eaefcd69b7eeaa094ae24a39e7/diff:/var/lib/docker/overlay2/afb8ebd776ddcba17d1056f2350cd0b303c6664964644896a92e9c07252b5d95/diff:/var/lib/docker/overlay2/41b6235378ad54ccaec907f16811e7cd66bd777db63151293f4d8247a33af8f1/diff:/var/lib/docker/overlay2/e079465076581cb577a9d5c7d676cecb6495ddd73d9fc330e734203dd7e48607/diff:/var/lib/docker/overlay2/2d3a7c3e62a99d54d94c2562e13b904453442bda8208afe73cdbe1afdbdd0684/diff:/var/lib/docker/overlay2/b9e03b9cbc1c5a9bbdbb0c99ca5d7539c2fa81a37872c40e07377b52f1950f4b/diff:/var/lib/docker/overlay2/fd0b72378869edec809e7ead1e4448ae67c73245e0e98d751c51253c80f12d56/diff:/var/lib/docker/overlay2/a34f5625ad35eb2eb1058204a5c23590d70d9aae62a3a0cf05f87501c388ccde/diff:/var/lib/docker/overlay2/6221ad5f4d7b133c35d96ab112cf2eb437196475a72ea0ec8952c058c6644381/diff:/var/lib/docker/overlay2/b33a322162ab62a47e5e731b35da4a989d8a79fcb67e1925b109eace6772370c/diff:/var/lib/docker/overlay2/b52fc81aca49f276f1c709fa139521063628f4042b9da5969a3487a57ee3226b/diff:/var/lib/docker/overlay2/5b4d11a181cad1ea657c7ea99d422b51c942ece21b8d24442b4e8806644e0e1c/diff:/var/lib/docker/overlay2/1620ce1d42f02f38d07f3ff0970e3df6940a3be20f3c7cd835f4f40f5cc2d010/diff:/var/lib/docker/overlay2/43f18c528700dc241024bb24f43a0d5192ecc9575f4b053582410f6265326434/diff:/var/lib/docker/overlay2/e59874999e485483e50da428a499e40c91890c33515857454d7a64bc04ca0c43/diff:/var/lib/docker/overlay2/a120ff1bbaa325cd87d2682d6751d3bf287b66d4bbe31bd1f9f6283d724491ac/diff:/var/lib/docker/overlay2/a6a6f3646fabc023283ff6349b9627be8332c4bb740688f8fda12c98bd76b725/diff:/var/lib/docker/overlay2/3c2b110c4b3a8689b2792b2b73f99f06bd9858b494c2164e812208579b0223f2/diff:/var/lib/docker/overlay2/98e3881e2e4128283f8d66fafc082bc795e22eab77f135635d3249367b92ba5c/diff:/var/lib/docker/overlay2/ce937670cf64eff618c699bfd15e46c6d70c0184fef594182e5ec6df83b265bc/diff",
	                "MergedDir": "/var/lib/docker/overlay2/dadeb2eddd4e44191a9cbc0ea441c3b044c125e01ecdef76eaf6f1e678a0465d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/dadeb2eddd4e44191a9cbc0ea441c3b044c125e01ecdef76eaf6f1e678a0465d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/dadeb2eddd4e44191a9cbc0ea441c3b044c125e01ecdef76eaf6f1e678a0465d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-20220412200510-42006",
	                "Source": "/var/lib/docker/volumes/embed-certs-20220412200510-42006/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-20220412200510-42006",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-20220412200510-42006",
	                "name.minikube.sigs.k8s.io": "embed-certs-20220412200510-42006",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "63951c837bc4cbec77dc92e6cf6cbd1c5d6291277afb0821214e3e674d933846",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49432"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49431"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49428"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49430"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49429"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/63951c837bc4",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-20220412200510-42006": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "340eb3625ebd",
	                        "embed-certs-20220412200510-42006"
	                    ],
	                    "NetworkID": "4ace6a0fae231d855dc7c20348778126fda239556e97939a30b4df667ae930f8",
	                    "EndpointID": "d9bb1d4d461f8a5e6941f56ff72265d47d90204c1351eac2c95e6da67e66c2af",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
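The inspect dump above is how the post-mortem confirms the node container is still "running" and which 127.0.0.1 host ports front the guest ports (22, 2376, 5000, 8443, 32443). A hedged sketch of reading just those fields with the Docker Go SDK follows; it is illustrative only and is not the code the test suite uses.

// inspect.go: sketch of reading container state and the port map via the
// Docker Go SDK. The container name is taken from the log above; everything
// else here is a hypothetical standalone program.
package main

import (
	"context"
	"fmt"

	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	info, err := cli.ContainerInspect(context.Background(), "embed-certs-20220412200510-42006")
	if err != nil {
		panic(err)
	}

	// State.Status and the published ports are the fields the post-mortem
	// checks before collecting logs.
	fmt.Println("status:", info.State.Status)
	for port, bindings := range info.NetworkSettings.Ports {
		for _, b := range bindings {
			fmt.Printf("%s -> %s:%s\n", port, b.HostIP, b.HostPort)
		}
	}
}

The same fields are available from the shell with docker inspect --format templates over .State.Status and .NetworkSettings.Ports.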
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20220412200510-42006 -n embed-certs-20220412200510-42006

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-20220412200510-42006 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-20220412200510-42006 logs -n 25: (1.073430537s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|-------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                       Args                        |                     Profile                     |  User   | Version |          Start Time           |           End Time            |
	|---------|---------------------------------------------------|-------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| pause   | -p                                                | newest-cni-20220412201253-42006                 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:14:43 UTC | Tue, 12 Apr 2022 20:14:44 UTC |
	|         | newest-cni-20220412201253-42006                   |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=1                            |                                                 |         |         |                               |                               |
	| unpause | -p                                                | newest-cni-20220412201253-42006                 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:14:45 UTC | Tue, 12 Apr 2022 20:14:45 UTC |
	|         | newest-cni-20220412201253-42006                   |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=1                            |                                                 |         |         |                               |                               |
	| delete  | -p                                                | newest-cni-20220412201253-42006                 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:14:46 UTC | Tue, 12 Apr 2022 20:14:49 UTC |
	|         | newest-cni-20220412201253-42006                   |                                                 |         |         |                               |                               |
	| delete  | -p                                                | newest-cni-20220412201253-42006                 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:14:49 UTC | Tue, 12 Apr 2022 20:14:49 UTC |
	|         | newest-cni-20220412201253-42006                   |                                                 |         |         |                               |                               |
	| -p      | old-k8s-version-20220412200421-42006              | old-k8s-version-20220412200421-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:17:18 UTC | Tue, 12 Apr 2022 20:17:19 UTC |
	|         | logs -n 25                                        |                                                 |         |         |                               |                               |
	| -p      | old-k8s-version-20220412200421-42006              | old-k8s-version-20220412200421-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:17:20 UTC | Tue, 12 Apr 2022 20:17:21 UTC |
	|         | logs -n 25                                        |                                                 |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | old-k8s-version-20220412200421-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:17:22 UTC | Tue, 12 Apr 2022 20:17:22 UTC |
	|         | old-k8s-version-20220412200421-42006              |                                                 |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                 |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                 |         |         |                               |                               |
	| -p      | default-k8s-different-port-20220412201228-42006   | default-k8s-different-port-20220412201228-42006 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:17:23 UTC | Tue, 12 Apr 2022 20:17:24 UTC |
	|         | logs -n 25                                        |                                                 |         |         |                               |                               |
	| stop    | -p                                                | old-k8s-version-20220412200421-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:17:23 UTC | Tue, 12 Apr 2022 20:17:28 UTC |
	|         | old-k8s-version-20220412200421-42006              |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                 |         |         |                               |                               |
	| addons  | enable dashboard -p                               | old-k8s-version-20220412200421-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:17:29 UTC | Tue, 12 Apr 2022 20:17:29 UTC |
	|         | old-k8s-version-20220412200421-42006              |                                                 |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                 |         |         |                               |                               |
	| -p      | embed-certs-20220412200510-42006                  | embed-certs-20220412200510-42006                | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:18:10 UTC | Tue, 12 Apr 2022 20:18:11 UTC |
	|         | logs -n 25                                        |                                                 |         |         |                               |                               |
	| -p      | embed-certs-20220412200510-42006                  | embed-certs-20220412200510-42006                | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:18:13 UTC | Tue, 12 Apr 2022 20:18:13 UTC |
	|         | logs -n 25                                        |                                                 |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | embed-certs-20220412200510-42006                | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:18:14 UTC | Tue, 12 Apr 2022 20:18:14 UTC |
	|         | embed-certs-20220412200510-42006                  |                                                 |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                 |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                 |         |         |                               |                               |
	| stop    | -p                                                | embed-certs-20220412200510-42006                | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:18:15 UTC | Tue, 12 Apr 2022 20:18:25 UTC |
	|         | embed-certs-20220412200510-42006                  |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                 |         |         |                               |                               |
	| addons  | enable dashboard -p                               | embed-certs-20220412200510-42006                | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:18:25 UTC | Tue, 12 Apr 2022 20:18:25 UTC |
	|         | embed-certs-20220412200510-42006                  |                                                 |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                 |         |         |                               |                               |
	| -p      | default-k8s-different-port-20220412201228-42006   | default-k8s-different-port-20220412201228-42006 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:25:26 UTC | Tue, 12 Apr 2022 20:25:27 UTC |
	|         | logs -n 25                                        |                                                 |         |         |                               |                               |
	| -p      | default-k8s-different-port-20220412201228-42006   | default-k8s-different-port-20220412201228-42006 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:25:28 UTC | Tue, 12 Apr 2022 20:25:29 UTC |
	|         | logs -n 25                                        |                                                 |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | default-k8s-different-port-20220412201228-42006 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:25:29 UTC | Tue, 12 Apr 2022 20:25:30 UTC |
	|         | default-k8s-different-port-20220412201228-42006   |                                                 |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                 |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                 |         |         |                               |                               |
	| stop    | -p                                                | default-k8s-different-port-20220412201228-42006 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:25:30 UTC | Tue, 12 Apr 2022 20:25:40 UTC |
	|         | default-k8s-different-port-20220412201228-42006   |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                 |         |         |                               |                               |
	| addons  | enable dashboard -p                               | default-k8s-different-port-20220412201228-42006 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:25:40 UTC | Tue, 12 Apr 2022 20:25:40 UTC |
	|         | default-k8s-different-port-20220412201228-42006   |                                                 |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                 |         |         |                               |                               |
	| -p      | old-k8s-version-20220412200421-42006              | old-k8s-version-20220412200421-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:27:23 UTC | Tue, 12 Apr 2022 20:27:24 UTC |
	|         | logs -n 25                                        |                                                 |         |         |                               |                               |
	| -p      | embed-certs-20220412200510-42006                  | embed-certs-20220412200510-42006                | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:27:28 UTC | Tue, 12 Apr 2022 20:27:28 UTC |
	|         | logs -n 25                                        |                                                 |         |         |                               |                               |
	| -p      | default-k8s-different-port-20220412201228-42006   | default-k8s-different-port-20220412201228-42006 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:34:43 UTC | Tue, 12 Apr 2022 20:34:44 UTC |
	|         | logs -n 25                                        |                                                 |         |         |                               |                               |
	| -p      | old-k8s-version-20220412200421-42006              | old-k8s-version-20220412200421-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:36:26 UTC | Tue, 12 Apr 2022 20:36:27 UTC |
	|         | logs -n 25                                        |                                                 |         |         |                               |                               |
	| delete  | -p                                                | old-k8s-version-20220412200421-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:36:27 UTC | Tue, 12 Apr 2022 20:36:30 UTC |
	|         | old-k8s-version-20220412200421-42006              |                                                 |         |         |                               |                               |
	|---------|---------------------------------------------------|-------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/04/12 20:25:40
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.18 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0412 20:25:40.977489  302775 out.go:297] Setting OutFile to fd 1 ...
	I0412 20:25:40.977641  302775 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0412 20:25:40.977651  302775 out.go:310] Setting ErrFile to fd 2...
	I0412 20:25:40.977656  302775 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0412 20:25:40.977775  302775 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/bin
	I0412 20:25:40.978024  302775 out.go:304] Setting JSON to false
	I0412 20:25:40.979319  302775 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":11294,"bootTime":1649783847,"procs":329,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1023-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0412 20:25:40.979397  302775 start.go:125] virtualization: kvm guest
	I0412 20:25:40.982252  302775 out.go:176] * [default-k8s-different-port-20220412201228-42006] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	I0412 20:25:40.984292  302775 out.go:176]   - MINIKUBE_LOCATION=13812
	I0412 20:25:40.982508  302775 notify.go:193] Checking for updates...
	I0412 20:25:40.986069  302775 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0412 20:25:40.987699  302775 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0412 20:25:40.989177  302775 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube
	I0412 20:25:40.990958  302775 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0412 20:25:40.991481  302775 config.go:178] Loaded profile config "default-k8s-different-port-20220412201228-42006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.5
	I0412 20:25:40.992603  302775 driver.go:346] Setting default libvirt URI to qemu:///system
	I0412 20:25:41.036514  302775 docker.go:137] docker version: linux-20.10.14
	I0412 20:25:41.036604  302775 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0412 20:25:41.138222  302775 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:75 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2022-04-12 20:25:41.069111625 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1023-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662799872 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clie
ntInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0412 20:25:41.138342  302775 docker.go:254] overlay module found
	I0412 20:25:41.140887  302775 out.go:176] * Using the docker driver based on existing profile
	I0412 20:25:41.140919  302775 start.go:284] selected driver: docker
	I0412 20:25:41.140926  302775 start.go:801] validating driver "docker" against &{Name:default-k8s-different-port-20220412201228-42006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:default-k8s-different-port-20220412201228-
42006 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.23.5 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTim
eout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0412 20:25:41.141041  302775 start.go:812] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	W0412 20:25:41.141086  302775 oci.go:120] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0412 20:25:41.141109  302775 out.go:241] ! Your cgroup does not allow setting memory.
	I0412 20:25:41.142724  302775 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0412 20:25:41.143315  302775 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0412 20:25:41.241191  302775 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:75 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2022-04-12 20:25:41.17623516 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1023-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662799872 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	W0412 20:25:41.241354  302775 oci.go:120] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0412 20:25:41.241406  302775 out.go:241] ! Your cgroup does not allow setting memory.
	I0412 20:25:41.243729  302775 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0412 20:25:41.243836  302775 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0412 20:25:41.243861  302775 cni.go:93] Creating CNI manager for ""
	I0412 20:25:41.243872  302775 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0412 20:25:41.243889  302775 start_flags.go:306] config:
	{Name:default-k8s-different-port-20220412201228-42006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:default-k8s-different-port-20220412201228-42006 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:
[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.23.5 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Mu
ltiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0412 20:25:41.246889  302775 out.go:176] * Starting control plane node default-k8s-different-port-20220412201228-42006 in cluster default-k8s-different-port-20220412201228-42006
	I0412 20:25:41.246928  302775 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0412 20:25:41.248537  302775 out.go:176] * Pulling base image ...
	I0412 20:25:41.248572  302775 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime containerd
	I0412 20:25:41.248612  302775 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-containerd-overlay2-amd64.tar.lz4
	I0412 20:25:41.248642  302775 cache.go:57] Caching tarball of preloaded images
	I0412 20:25:41.248665  302775 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon
	I0412 20:25:41.248918  302775 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0412 20:25:41.248940  302775 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.5 on containerd
	I0412 20:25:41.249111  302775 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220412201228-42006/config.json ...
	I0412 20:25:41.295232  302775 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon, skipping pull
	I0412 20:25:41.295265  302775 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 exists in daemon, skipping load
	I0412 20:25:41.295288  302775 cache.go:206] Successfully downloaded all kic artifacts
	I0412 20:25:41.295333  302775 start.go:352] acquiring machines lock for default-k8s-different-port-20220412201228-42006: {Name:mk673e2ef5ad74005354b6f8044ae48e370ea3c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0412 20:25:41.295441  302775 start.go:356] acquired machines lock for "default-k8s-different-port-20220412201228-42006" in 78.98µs
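The "acquiring machines lock" lines above come from a named mutex (mk673e2e... with Delay:500ms Timeout:10m0s) that serializes machine create/fix operations. As a rough illustration only, not minikube's actual implementation, the same acquire/poll/timeout shape looks like this with a plain lock file:

    // Sketch only: minikube's machines lock is a named mutex, not a lock
    // file; this reproduces the Delay/Timeout behaviour from the log line.
    package main

    import (
        "errors"
        "fmt"
        "os"
        "time"
    )

    func acquire(path string, delay, timeout time.Duration) (func(), error) {
        deadline := time.Now().Add(timeout)
        for {
            f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
            if err == nil {
                f.Close()
                return func() { os.Remove(path) }, nil // release
            }
            if time.Now().After(deadline) {
                return nil, errors.New("timed out waiting for machines lock")
            }
            time.Sleep(delay) // poll again, mirroring the 500ms delay in the log
        }
    }

    func main() {
        release, err := acquire("/tmp/machines.lock", 500*time.Millisecond, 10*time.Minute)
        if err != nil {
            fmt.Println(err)
            return
        }
        defer release()
        fmt.Println("lock held; safe to reuse the existing machine")
    }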
	I0412 20:25:41.295472  302775 start.go:94] Skipping create...Using existing machine configuration
	I0412 20:25:41.295481  302775 fix.go:55] fixHost starting: 
	I0412 20:25:41.295714  302775 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220412201228-42006 --format={{.State.Status}}
	I0412 20:25:41.330052  302775 fix.go:103] recreateIfNeeded on default-k8s-different-port-20220412201228-42006: state=Stopped err=<nil>
	W0412 20:25:41.330099  302775 fix.go:129] unexpected machine state, will restart: <nil>
	I0412 20:25:39.404942  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:25:41.405860  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:25:43.905123  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:25:41.529434  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:25:44.030080  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:25:41.332812  302775 out.go:176] * Restarting existing docker container for "default-k8s-different-port-20220412201228-42006" ...
	I0412 20:25:41.332900  302775 cli_runner.go:164] Run: docker start default-k8s-different-port-20220412201228-42006
	I0412 20:25:41.735198  302775 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220412201228-42006 --format={{.State.Status}}
	I0412 20:25:41.771480  302775 kic.go:416] container "default-k8s-different-port-20220412201228-42006" state is running.
	I0412 20:25:41.771899  302775 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220412201228-42006
	I0412 20:25:41.807070  302775 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220412201228-42006/config.json ...
	I0412 20:25:41.807321  302775 machine.go:88] provisioning docker machine ...
	I0412 20:25:41.807352  302775 ubuntu.go:169] provisioning hostname "default-k8s-different-port-20220412201228-42006"
	I0412 20:25:41.807404  302775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220412201228-42006
	I0412 20:25:41.843643  302775 main.go:134] libmachine: Using SSH client type: native
	I0412 20:25:41.843852  302775 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7d71c0] 0x7da220 <nil>  [] 0s} 127.0.0.1 49437 <nil> <nil>}
	I0412 20:25:41.843870  302775 main.go:134] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20220412201228-42006 && echo "default-k8s-different-port-20220412201228-42006" | sudo tee /etc/hostname
	I0412 20:25:41.844512  302775 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60986->127.0.0.1:49437: read: connection reset by peer
	I0412 20:25:44.977976  302775 main.go:134] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20220412201228-42006
	
	I0412 20:25:44.978060  302775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220412201228-42006
	I0412 20:25:45.012801  302775 main.go:134] libmachine: Using SSH client type: native
	I0412 20:25:45.012959  302775 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7d71c0] 0x7da220 <nil>  [] 0s} 127.0.0.1 49437 <nil> <nil>}
	I0412 20:25:45.012982  302775 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20220412201228-42006' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20220412201228-42006/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20220412201228-42006' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0412 20:25:45.132428  302775 main.go:134] libmachine: SSH cmd err, output: <nil>: 
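Provisioning here talks to the container over its published SSH port (127.0.0.1:49437) with the profile's id_rsa key, then runs the hostname and /etc/hosts commands shown above. A minimal stand-alone sketch of that round trip, assuming golang.org/x/crypto/ssh rather than minikube's own libmachine client (key path shortened; port and user taken from the sshutil.go lines below):

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        // Relative to MINIKUBE_HOME; adjust for your host.
        keyPath := ".minikube/machines/default-k8s-different-port-20220412201228-42006/id_rsa"
        key, err := os.ReadFile(keyPath)
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker", // Username from the log's ssh client lines
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test container
        }
        client, err := ssh.Dial("tcp", "127.0.0.1:49437", cfg) // published 22/tcp port
        if err != nil {
            panic(err)
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer sess.Close()
        name := "default-k8s-different-port-20220412201228-42006"
        out, err := sess.CombinedOutput(
            `sudo hostname ` + name + ` && echo "` + name + `" | sudo tee /etc/hostname`)
        fmt.Printf("%s (err=%v)\n", out, err)
    }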
	I0412 20:25:45.132458  302775 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.mini
kube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube}
	I0412 20:25:45.132515  302775 ubuntu.go:177] setting up certificates
	I0412 20:25:45.132527  302775 provision.go:83] configureAuth start
	I0412 20:25:45.132583  302775 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220412201228-42006
	I0412 20:25:45.167292  302775 provision.go:138] copyHostCerts
	I0412 20:25:45.167378  302775 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem, removing ...
	I0412 20:25:45.167393  302775 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem
	I0412 20:25:45.167463  302775 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem (1082 bytes)
	I0412 20:25:45.167565  302775 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem, removing ...
	I0412 20:25:45.167579  302775 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem
	I0412 20:25:45.167616  302775 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem (1123 bytes)
	I0412 20:25:45.167686  302775 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem, removing ...
	I0412 20:25:45.167698  302775 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem
	I0412 20:25:45.167731  302775 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem (1675 bytes)
	I0412 20:25:45.167790  302775 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20220412201228-42006 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-different-port-20220412201228-42006]
	I0412 20:25:45.287902  302775 provision.go:172] copyRemoteCerts
	I0412 20:25:45.287991  302775 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0412 20:25:45.288040  302775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220412201228-42006
	I0412 20:25:45.322519  302775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220412201228-42006/id_rsa Username:docker}
	I0412 20:25:45.411995  302775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0412 20:25:45.430261  302775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem --> /etc/docker/server.pem (1310 bytes)
	I0412 20:25:45.448712  302775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0412 20:25:45.466551  302775 provision.go:86] duration metric: configureAuth took 334.00574ms
	I0412 20:25:45.466577  302775 ubuntu.go:193] setting minikube options for container-runtime
	I0412 20:25:45.466762  302775 config.go:178] Loaded profile config "default-k8s-different-port-20220412201228-42006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.5
	I0412 20:25:45.466775  302775 machine.go:91] provisioned docker machine in 3.659438406s
	I0412 20:25:45.466782  302775 start.go:306] post-start starting for "default-k8s-different-port-20220412201228-42006" (driver="docker")
	I0412 20:25:45.466788  302775 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0412 20:25:45.466829  302775 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0412 20:25:45.466867  302775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220412201228-42006
	I0412 20:25:45.501481  302775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220412201228-42006/id_rsa Username:docker}
	I0412 20:25:45.588112  302775 ssh_runner.go:195] Run: cat /etc/os-release
	I0412 20:25:45.591046  302775 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0412 20:25:45.591069  302775 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0412 20:25:45.591080  302775 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0412 20:25:45.591089  302775 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0412 20:25:45.591103  302775 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/addons for local assets ...
	I0412 20:25:45.591152  302775 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files for local assets ...
	I0412 20:25:45.591229  302775 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/420062.pem -> 420062.pem in /etc/ssl/certs
	I0412 20:25:45.591327  302775 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0412 20:25:45.598574  302775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/420062.pem --> /etc/ssl/certs/420062.pem (1708 bytes)
	I0412 20:25:45.617879  302775 start.go:309] post-start completed in 151.076407ms
	I0412 20:25:45.617968  302775 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0412 20:25:45.618023  302775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220412201228-42006
	I0412 20:25:45.652386  302775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220412201228-42006/id_rsa Username:docker}
	I0412 20:25:45.736884  302775 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0412 20:25:45.741043  302775 fix.go:57] fixHost completed within 4.445551228s
	I0412 20:25:45.741076  302775 start.go:81] releasing machines lock for "default-k8s-different-port-20220412201228-42006", held for 4.445612789s
	I0412 20:25:45.741159  302775 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220412201228-42006
	I0412 20:25:45.775496  302775 ssh_runner.go:195] Run: systemctl --version
	I0412 20:25:45.775542  302775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220412201228-42006
	I0412 20:25:45.775584  302775 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0412 20:25:45.775646  302775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220412201228-42006
	I0412 20:25:45.812306  302775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220412201228-42006/id_rsa Username:docker}
	I0412 20:25:45.812626  302775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220412201228-42006/id_rsa Username:docker}
	I0412 20:25:45.921246  302775 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0412 20:25:45.933022  302775 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0412 20:25:45.942974  302775 docker.go:183] disabling docker service ...
	I0412 20:25:45.943055  302775 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0412 20:25:45.953239  302775 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0412 20:25:45.962782  302775 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0412 20:25:46.404485  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:25:48.404784  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:25:46.529944  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:25:48.530319  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:25:46.046623  302775 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0412 20:25:46.129007  302775 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0412 20:25:46.138577  302775 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0412 20:25:46.152328  302775 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLm1vbml0b3IudjEuY2dyb3VwcyJdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNiIKICAgIHN0YXRzX2NvbGxlY3RfcGVyaW9kID0gMTAKICAgIGVuYWJsZV90bHNfc3RyZWFtaW5nID0gZ
mFsc2UKICAgIG1heF9jb250YWluZXJfbG9nX2xpbmVfc2l6ZSA9IDE2Mzg0CiAgICByZXN0cmljdF9vb21fc2NvcmVfYWRqID0gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZF0KICAgICAgZGlzY2FyZF91bnBhY2tlZF9sYXllcnMgPSB0cnVlCiAgICAgIHNuYXBzaG90dGVyID0gIm92ZXJsYXlmcyIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQuZGVmYXVsdF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnVudHJ1c3RlZF93b3JrbG9hZF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICIiCiAgICAgICAgcnVudGltZV9lbmdpbmUgPSAiIgogICAgICAgIHJ1bnRpbWVfcm9vdCA9ICIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzLnJ1bmNdCiAgICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICBTeXN0ZW1kQ
2dyb3VwID0gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY25pXQogICAgICBiaW5fZGlyID0gIi9vcHQvY25pL2JpbiIKICAgICAgY29uZl9kaXIgPSAiL2V0Yy9jbmkvbmV0Lm1rIgogICAgICBjb25mX3RlbXBsYXRlID0gIiIKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5yZWdpc3RyeV0KICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5Lm1pcnJvcnNdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5Lm1pcnJvcnMuImRvY2tlci5pbyJdCiAgICAgICAgICBlbmRwb2ludCA9IFsiaHR0cHM6Ly9yZWdpc3RyeS0xLmRvY2tlci5pbyJdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuc2VydmljZS52MS5kaWZmLXNlcnZpY2UiXQogICAgZGVmYXVsdCA9IFsid2Fsa2luZyJdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ2MudjEuc2NoZWR1bGVyIl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
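The payload passed to printf is the complete /etc/containerd/config.toml, base64-encoded so it survives shell quoting. It decodes to a version = 2 config whose CRI plugin sets sandbox_image = "k8s.gcr.io/pause:3.6", points the CNI conf_dir at /etc/cni/net.mk (the kubelet's cni-conf-dir ExtraOption), and leaves SystemdCgroup = false, matching the cgroupfs driver used later. A small decoder for inspecting it (the input file name is arbitrary):

    package main

    import (
        "encoding/base64"
        "fmt"
        "os"
        "strings"
    )

    func main() {
        // Paste the base64 payload from the log above into config.toml.b64,
        // then run this to see the rendered TOML.
        b64, err := os.ReadFile("config.toml.b64")
        if err != nil {
            panic(err)
        }
        raw, err := base64.StdEncoding.DecodeString(strings.TrimSpace(string(b64)))
        if err != nil {
            panic(err)
        }
        fmt.Print(string(raw))
    }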
	I0412 20:25:46.166473  302775 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0412 20:25:46.173272  302775 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0412 20:25:46.180113  302775 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0412 20:25:46.251894  302775 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0412 20:25:46.327719  302775 start.go:441] Will wait 60s for socket path /run/containerd/containerd.sock
	I0412 20:25:46.327799  302775 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0412 20:25:46.331793  302775 start.go:462] Will wait 60s for crictl version
	I0412 20:25:46.331863  302775 ssh_runner.go:195] Run: sudo crictl version
	I0412 20:25:46.357306  302775 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-04-12T20:25:46Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
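The first `sudo crictl version` fails because containerd's CRI server has not finished initializing after the restart, so retry.go reschedules the probe (about 11s here) inside the 60s budget declared at start.go:462. A minimal sketch of that wait loop, assuming a simple doubling backoff rather than minikube's exact schedule:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(60 * time.Second) // "Will wait 60s for crictl version"
        delay := 2 * time.Second                     // retry.go picks its own interval (~11s above)
        for {
            out, err := exec.Command("sudo", "crictl", "version").CombinedOutput()
            if err == nil {
                fmt.Printf("%s", out)
                return
            }
            if time.Now().After(deadline) {
                fmt.Printf("crictl never became ready: %v\n%s", err, out)
                return
            }
            time.Sleep(delay)
            delay *= 2 // simple doubling backoff for this sketch
        }
    }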
	I0412 20:25:50.405078  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:25:52.905509  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:25:51.029894  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:25:53.030953  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:25:55.529321  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:25:57.404189  302775 ssh_runner.go:195] Run: sudo crictl version
	I0412 20:25:57.428756  302775 start.go:471] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.5.10
	RuntimeApiVersion:  v1alpha2
	I0412 20:25:57.428821  302775 ssh_runner.go:195] Run: containerd --version
	I0412 20:25:57.451527  302775 ssh_runner.go:195] Run: containerd --version
	I0412 20:25:57.476141  302775 out.go:176] * Preparing Kubernetes v1.23.5 on containerd 1.5.10 ...
	I0412 20:25:57.476238  302775 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220412201228-42006 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0412 20:25:57.510584  302775 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0412 20:25:57.514080  302775 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
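The bash pipeline above is idempotent: it drops any stale host.minikube.internal line before appending the gateway mapping, so repeated starts never accumulate duplicate entries. An equivalent sketch in Go (same tab-separated format; writing /etc/hosts needs root, and this simplification does not preserve blank lines):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // ensureHostsEntry mirrors the grep/echo pipeline: drop any existing
    // line ending in "\t<name>", then append "ip\tname".
    func ensureHostsEntry(path, ip, name string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var keep []string
        for _, line := range strings.Split(string(data), "\n") {
            if line != "" && !strings.HasSuffix(line, "\t"+name) {
                keep = append(keep, line)
            }
        }
        keep = append(keep, ip+"\t"+name)
        return os.WriteFile(path, []byte(strings.Join(keep, "\n")+"\n"), 0o644)
    }

    func main() {
        if err := ensureHostsEntry("/etc/hosts", "192.168.49.1", "host.minikube.internal"); err != nil {
            fmt.Println(err)
        }
    }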
	I0412 20:25:55.405528  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:25:57.904637  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:25:57.529524  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:25:59.529890  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:25:57.525999  302775 out.go:176]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0412 20:25:57.526084  302775 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime containerd
	I0412 20:25:57.526141  302775 ssh_runner.go:195] Run: sudo crictl images --output json
	I0412 20:25:57.550533  302775 containerd.go:607] all images are preloaded for containerd runtime.
	I0412 20:25:57.550557  302775 containerd.go:521] Images already preloaded, skipping extraction
	I0412 20:25:57.550612  302775 ssh_runner.go:195] Run: sudo crictl images --output json
	I0412 20:25:57.574550  302775 containerd.go:607] all images are preloaded for containerd runtime.
	I0412 20:25:57.574580  302775 cache_images.go:84] Images are preloaded, skipping loading
	I0412 20:25:57.574639  302775 ssh_runner.go:195] Run: sudo crictl info
	I0412 20:25:57.599639  302775 cni.go:93] Creating CNI manager for ""
	I0412 20:25:57.599668  302775 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0412 20:25:57.599690  302775 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0412 20:25:57.599711  302775 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8444 KubernetesVersion:v1.23.5 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20220412201228-42006 NodeName:default-k8s-different-port-20220412201228-42006 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49
.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0412 20:25:57.599848  302775 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "default-k8s-different-port-20220412201228-42006"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.5
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
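Note that the rendered ClusterConfiguration pins both bindPort and controlPlaneEndpoint to 8444, the non-default API server port this profile exists to exercise. A hypothetical spot check (not part of minikube) against the file written later to /var/tmp/minikube/kubeadm.yaml.new, run inside the node:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new") // path from the scp line below
        if err != nil {
            panic(err)
        }
        for _, want := range []string{
            "bindPort: 8444",
            "controlPlaneEndpoint: control-plane.minikube.internal:8444",
        } {
            if !strings.Contains(string(data), want) {
                fmt.Println("missing:", want)
                os.Exit(1)
            }
        }
        fmt.Println("kubeadm config pins the API server to port 8444")
    }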
	I0412 20:25:57.599941  302775 kubeadm.go:936] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.5/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=default-k8s-different-port-20220412201228-42006 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.5 ClusterName:default-k8s-different-port-20220412201228-42006 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
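The ExecStart line above carries the profile's ExtraOptions (notably --cni-conf-dir=/etc/cni/net.mk) alongside the containerd socket endpoints, with the flags listed alphabetically. A hypothetical helper, not minikube's kubeadm.go, that renders just the [Service] stanza of such a drop-in from a flag map (the real 10-kubeadm.conf also carries [Unit] Wants= and [Install]):

    package main

    import (
        "fmt"
        "sort"
        "strings"
    )

    func renderDropIn(bin string, flags map[string]string) string {
        keys := make([]string, 0, len(flags))
        for k := range flags {
            keys = append(keys, k)
        }
        sort.Strings(keys) // the log's ExecStart lists flags alphabetically
        parts := []string{bin}
        for _, k := range keys {
            parts = append(parts, fmt.Sprintf("--%s=%s", k, flags[k]))
        }
        return "[Service]\nExecStart=\nExecStart=" + strings.Join(parts, " ") + "\n"
    }

    func main() {
        fmt.Print(renderDropIn("/var/lib/minikube/binaries/v1.23.5/kubelet", map[string]string{
            "cni-conf-dir":               "/etc/cni/net.mk",
            "container-runtime":          "remote",
            "container-runtime-endpoint": "unix:///run/containerd/containerd.sock",
            "node-ip":                    "192.168.49.2",
        }))
    }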
	I0412 20:25:57.600004  302775 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.5
	I0412 20:25:57.607520  302775 binaries.go:44] Found k8s binaries, skipping transfer
	I0412 20:25:57.607582  302775 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0412 20:25:57.614505  302775 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (592 bytes)
	I0412 20:25:57.627492  302775 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0412 20:25:57.640002  302775 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2076 bytes)
	I0412 20:25:57.652626  302775 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0412 20:25:57.655502  302775 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0412 20:25:57.664909  302775 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220412201228-42006 for IP: 192.168.49.2
	I0412 20:25:57.665006  302775 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.key
	I0412 20:25:57.665052  302775 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.key
	I0412 20:25:57.665122  302775 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220412201228-42006/client.key
	I0412 20:25:57.665173  302775 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220412201228-42006/apiserver.key.dd3b5fb2
	I0412 20:25:57.665208  302775 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220412201228-42006/proxy-client.key
	I0412 20:25:57.665293  302775 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/42006.pem (1338 bytes)
	W0412 20:25:57.665321  302775 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/42006_empty.pem, impossibly tiny 0 bytes
	I0412 20:25:57.665332  302775 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem (1679 bytes)
	I0412 20:25:57.665358  302775 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem (1082 bytes)
	I0412 20:25:57.665384  302775 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem (1123 bytes)
	I0412 20:25:57.665409  302775 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem (1675 bytes)
	I0412 20:25:57.665455  302775 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/420062.pem (1708 bytes)
	I0412 20:25:57.666053  302775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220412201228-42006/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0412 20:25:57.683954  302775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220412201228-42006/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0412 20:25:57.701541  302775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220412201228-42006/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0412 20:25:57.719461  302775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220412201228-42006/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0412 20:25:57.737734  302775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0412 20:25:57.756457  302775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0412 20:25:57.774968  302775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0412 20:25:57.793059  302775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0412 20:25:57.810982  302775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0412 20:25:57.829015  302775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/42006.pem --> /usr/share/ca-certificates/42006.pem (1338 bytes)
	I0412 20:25:57.847312  302775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/420062.pem --> /usr/share/ca-certificates/420062.pem (1708 bytes)
	I0412 20:25:57.864991  302775 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0412 20:25:57.878055  302775 ssh_runner.go:195] Run: openssl version
	I0412 20:25:57.883971  302775 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/420062.pem && ln -fs /usr/share/ca-certificates/420062.pem /etc/ssl/certs/420062.pem"
	I0412 20:25:57.892175  302775 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/420062.pem
	I0412 20:25:57.895736  302775 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Apr 12 19:26 /usr/share/ca-certificates/420062.pem
	I0412 20:25:57.895785  302775 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/420062.pem
	I0412 20:25:57.900802  302775 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/420062.pem /etc/ssl/certs/3ec20f2e.0"
	I0412 20:25:57.908397  302775 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0412 20:25:57.916262  302775 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0412 20:25:57.919469  302775 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Apr 12 19:21 /usr/share/ca-certificates/minikubeCA.pem
	I0412 20:25:57.919524  302775 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0412 20:25:57.924891  302775 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0412 20:25:57.932113  302775 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/42006.pem && ln -fs /usr/share/ca-certificates/42006.pem /etc/ssl/certs/42006.pem"
	I0412 20:25:57.940241  302775 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42006.pem
	I0412 20:25:57.943396  302775 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Apr 12 19:26 /usr/share/ca-certificates/42006.pem
	I0412 20:25:57.943447  302775 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42006.pem
	I0412 20:25:57.948339  302775 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/42006.pem /etc/ssl/certs/51391683.0"
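For context on the symlink names above: 3ec20f2e.0, b5213941.0 and 51391683.0 follow OpenSSL's subject-hash convention, in which each CA certificate under /usr/share/ca-certificates is linked into /etc/ssl/certs as <hash>.0 so TLS clients can discover it. A minimal Go sketch of the same hash-and-link sequence minikube runs over SSH; the path is illustrative and the program must run as root:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkCA mirrors the openssl/ln steps in the log: compute the
	// subject hash of a PEM certificate and symlink it into
	// /etc/ssl/certs as <hash>.0.
	func linkCA(pem string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", pem, err)
		}
		link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
		_ = os.Remove(link) // ln -fs semantics: replace any stale link
		return os.Symlink(pem, link)
	}

	func main() {
		if err := linkCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}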
	I0412 20:25:57.955118  302775 kubeadm.go:391] StartCluster: {Name:default-k8s-different-port-20220412201228-42006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:default-k8s-different-port-20220412201228-42006 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.23.5 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
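The block above is Go's %+v rendering of minikube's cluster configuration. A trimmed, hypothetical sketch of the fields most relevant to this run (names copied from the dump; types inferred for illustration, not minikube's actual definitions):

	// Hypothetical, trimmed view of the config printed above.
	type KubernetesConfig struct {
		KubernetesVersion string // v1.23.5
		ClusterName       string // default-k8s-different-port-20220412201228-42006
		ContainerRuntime  string // containerd
		NetworkPlugin     string // cni
		ServiceCIDR       string // 10.96.0.0/12
		NodePort          int    // 8444, hence the "different-port" profile name
	}

	type ClusterConfig struct {
		Name             string
		Driver           string // docker
		Memory           int    // 2200 (MB)
		CPUs             int    // 2
		KubernetesConfig KubernetesConfig
	}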
	I0412 20:25:57.955221  302775 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0412 20:25:57.955270  302775 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0412 20:25:57.980566  302775 cri.go:87] found id: "9833ae46466ccbb9055512e120cc659bee6ff8ac05bf843caeb9217333fd6b63"
	I0412 20:25:57.980602  302775 cri.go:87] found id: "e86db06fb9ce1685b312bc36622f28895b85dab6e39ee399901dce4efc6da848"
	I0412 20:25:57.980613  302775 cri.go:87] found id: "51def5f5fb57c8ab61a9c585b1fe038e725e93a3a81684c7e48cceffbcd0e646"
	I0412 20:25:57.980624  302775 cri.go:87] found id: "3c8657a1a5932876c532e5632e32b1b7bd034c015a4b5519a1ff53cf749d1ffd"
	I0412 20:25:57.980634  302775 cri.go:87] found id: "1032ec9dc604b2d805be253a0f7df89424fc5ef71ef86566ee57cd79cf66939c"
	I0412 20:25:57.980651  302775 cri.go:87] found id: "71af7fb31571e3cef12dcdba3ab49897e95bdbe6c1d9d6d5bbb1c36c97242cda"
	I0412 20:25:57.980666  302775 cri.go:87] found id: ""
	I0412 20:25:57.980719  302775 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0412 20:25:57.995137  302775 cri.go:114] JSON = null
	W0412 20:25:57.995186  302775 kubeadm.go:398] unpause failed: list paused: list returned 0 containers, but ps returned 6
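The warning above is a consistency check between two container views: crictl (the CRI side) reported six kube-system containers, while runc list under containerd's runc root returned null JSON, so minikube found no paused containers to unpause and logged the 0-vs-6 mismatch before continuing. A sketch of that cross-check, with both commands taken from the log:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// CRI view: all kube-system container IDs.
		psOut, _ := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		ids := strings.Fields(string(psOut))

		// runc view: containers known to the low-level runtime.
		// "null" unmarshals into a nil slice, reproducing 0 vs 6.
		listOut, _ := exec.Command("sudo", "runc", "--root",
			"/run/containerd/runc/k8s.io", "list", "-f", "json").Output()
		var fromRunc []map[string]any
		_ = json.Unmarshal(listOut, &fromRunc)

		if len(fromRunc) != len(ids) {
			fmt.Printf("unpause skipped: list returned %d containers, but ps returned %d\n",
				len(fromRunc), len(ids))
		}
	}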
	I0412 20:25:57.995232  302775 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0412 20:25:58.002528  302775 kubeadm.go:402] found existing configuration files, will attempt cluster restart
	I0412 20:25:58.002554  302775 kubeadm.go:601] restartCluster start
	I0412 20:25:58.002599  302775 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0412 20:25:58.009347  302775 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:25:58.010180  302775 kubeconfig.go:116] verify returned: extract IP: "default-k8s-different-port-20220412201228-42006" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0412 20:25:58.010679  302775 kubeconfig.go:127] "default-k8s-different-port-20220412201228-42006" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig - will repair!
	I0412 20:25:58.011431  302775 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig: {Name:mk47182dadcc139652898f38b199a7292a3b4031 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 20:25:58.013184  302775 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0412 20:25:58.020529  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:25:58.020588  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:25:58.029161  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:25:58.229565  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:25:58.229683  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:25:58.238841  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:25:58.430075  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:25:58.430153  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:25:58.439240  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:25:58.629511  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:25:58.629591  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:25:58.638727  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:25:58.829920  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:25:58.830002  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:25:58.839034  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:25:59.030207  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:25:59.030273  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:25:59.038870  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:25:59.230141  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:25:59.230228  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:25:59.239506  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:25:59.429823  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:25:59.429895  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:25:59.438940  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:25:59.630148  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:25:59.630223  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:25:59.639014  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:25:59.830279  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:25:59.830365  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:25:59.839400  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:26:00.029480  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:26:00.029578  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:26:00.039506  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:26:00.229819  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:26:00.229932  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:26:00.238666  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:26:00.429971  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:26:00.430041  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:26:00.439152  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:26:00.629391  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:26:00.629472  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:26:00.638771  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:26:00.830087  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:26:00.830179  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:26:00.839152  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:25:59.905306  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:01.905660  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:02.030088  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:04.030403  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:01.029653  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:26:01.029717  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:26:01.038688  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:26:01.038731  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:26:01.038777  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:26:01.047040  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:26:01.047087  302775 kubeadm.go:576] needs reconfigure: apiserver error: timed out waiting for the condition
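The roughly 200ms cadence visible in the timestamps above is a poll-until-deadline loop: each iteration asks pgrep for the apiserver PID and treats exit status 1 as "not up yet". A minimal sketch of that pattern; the interval and deadline here are assumptions for illustration, not minikube's exact tuning:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// waitForAPIServerPID polls pgrep until the kube-apiserver process
	// appears or the deadline passes, mirroring the loop in the log.
	func waitForAPIServerPID(deadline time.Duration) (string, error) {
		stop := time.Now().Add(deadline)
		for time.Now().Before(stop) {
			out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
			if err == nil {
				return strings.TrimSpace(string(out)), nil
			}
			time.Sleep(200 * time.Millisecond)
		}
		return "", fmt.Errorf("timed out waiting for kube-apiserver")
	}

	func main() {
		pid, err := waitForAPIServerPID(3 * time.Second)
		fmt.Println(pid, err)
	}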
	I0412 20:26:01.047098  302775 kubeadm.go:1067] stopping kube-system containers ...
	I0412 20:26:01.047119  302775 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0412 20:26:01.047173  302775 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0412 20:26:01.074252  302775 cri.go:87] found id: "9833ae46466ccbb9055512e120cc659bee6ff8ac05bf843caeb9217333fd6b63"
	I0412 20:26:01.074279  302775 cri.go:87] found id: "e86db06fb9ce1685b312bc36622f28895b85dab6e39ee399901dce4efc6da848"
	I0412 20:26:01.074289  302775 cri.go:87] found id: "51def5f5fb57c8ab61a9c585b1fe038e725e93a3a81684c7e48cceffbcd0e646"
	I0412 20:26:01.074295  302775 cri.go:87] found id: "3c8657a1a5932876c532e5632e32b1b7bd034c015a4b5519a1ff53cf749d1ffd"
	I0412 20:26:01.074302  302775 cri.go:87] found id: "1032ec9dc604b2d805be253a0f7df89424fc5ef71ef86566ee57cd79cf66939c"
	I0412 20:26:01.074309  302775 cri.go:87] found id: "71af7fb31571e3cef12dcdba3ab49897e95bdbe6c1d9d6d5bbb1c36c97242cda"
	I0412 20:26:01.074316  302775 cri.go:87] found id: ""
	I0412 20:26:01.074322  302775 cri.go:232] Stopping containers: [9833ae46466ccbb9055512e120cc659bee6ff8ac05bf843caeb9217333fd6b63 e86db06fb9ce1685b312bc36622f28895b85dab6e39ee399901dce4efc6da848 51def5f5fb57c8ab61a9c585b1fe038e725e93a3a81684c7e48cceffbcd0e646 3c8657a1a5932876c532e5632e32b1b7bd034c015a4b5519a1ff53cf749d1ffd 1032ec9dc604b2d805be253a0f7df89424fc5ef71ef86566ee57cd79cf66939c 71af7fb31571e3cef12dcdba3ab49897e95bdbe6c1d9d6d5bbb1c36c97242cda]
	I0412 20:26:01.074376  302775 ssh_runner.go:195] Run: which crictl
	I0412 20:26:01.077493  302775 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop 9833ae46466ccbb9055512e120cc659bee6ff8ac05bf843caeb9217333fd6b63 e86db06fb9ce1685b312bc36622f28895b85dab6e39ee399901dce4efc6da848 51def5f5fb57c8ab61a9c585b1fe038e725e93a3a81684c7e48cceffbcd0e646 3c8657a1a5932876c532e5632e32b1b7bd034c015a4b5519a1ff53cf749d1ffd 1032ec9dc604b2d805be253a0f7df89424fc5ef71ef86566ee57cd79cf66939c 71af7fb31571e3cef12dcdba3ab49897e95bdbe6c1d9d6d5bbb1c36c97242cda
	I0412 20:26:01.103072  302775 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0412 20:26:01.114425  302775 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0412 20:26:01.122172  302775 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Apr 12 20:12 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Apr 12 20:12 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2127 Apr 12 20:13 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5592 Apr 12 20:12 /etc/kubernetes/scheduler.conf
	
	I0412 20:26:01.122241  302775 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0412 20:26:01.129554  302775 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0412 20:26:01.136877  302775 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0412 20:26:01.143698  302775 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:26:01.143755  302775 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0412 20:26:01.150238  302775 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0412 20:26:01.157232  302775 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:26:01.157288  302775 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0412 20:26:01.164343  302775 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0412 20:26:01.171782  302775 kubeadm.go:678] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0412 20:26:01.171805  302775 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0412 20:26:01.218060  302775 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0412 20:26:01.745379  302775 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0412 20:26:01.885213  302775 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0412 20:26:01.938174  302775 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
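Rather than a full kubeadm init, the restart path replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated /var/tmp/minikube/kubeadm.yaml, which preserves existing cluster state. A sketch of driving the same sequence from Go; the commands mirror the log and error handling is trimmed:

	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		phases := [][]string{
			{"certs", "all"},
			{"kubeconfig", "all"},
			{"kubelet-start"},
			{"control-plane", "all"},
			{"etcd", "local"},
		}
		// Prefix PATH with the cached binaries dir, as the log commands do.
		path := "PATH=/var/lib/minikube/binaries/v1.23.5:/usr/bin:/bin"
		for _, p := range phases {
			args := append([]string{"env", path, "kubeadm", "init", "phase"}, p...)
			args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
			if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
				log.Fatalf("phase %v: %v\n%s", p, err, out)
			}
		}
	}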
	I0412 20:26:02.011809  302775 api_server.go:51] waiting for apiserver process to appear ...
	I0412 20:26:02.011879  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:26:02.521271  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:26:03.021279  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:26:03.521794  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:26:04.021460  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:26:04.521473  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:26:05.021310  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:26:05.521258  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:26:04.405325  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:06.905312  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:06.529561  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:08.530280  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:06.022069  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:26:06.522094  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:26:07.022120  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:26:07.521096  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:26:08.021120  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:26:08.091617  302775 api_server.go:71] duration metric: took 6.079806462s to wait for apiserver process to appear ...
	I0412 20:26:08.091701  302775 api_server.go:87] waiting for apiserver healthz status ...
	I0412 20:26:08.091726  302775 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0412 20:26:08.092170  302775 api_server.go:256] stopped: https://192.168.49.2:8444/healthz: Get "https://192.168.49.2:8444/healthz": dial tcp 192.168.49.2:8444: connect: connection refused
	I0412 20:26:08.592673  302775 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0412 20:26:11.086493  302775 api_server.go:266] https://192.168.49.2:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0412 20:26:11.086525  302775 api_server.go:102] status: https://192.168.49.2:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0412 20:26:11.092362  302775 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0412 20:26:11.097010  302775 api_server.go:266] https://192.168.49.2:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0412 20:26:11.097085  302775 api_server.go:102] status: https://192.168.49.2:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0412 20:26:11.592382  302775 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0412 20:26:11.597320  302775 api_server.go:266] https://192.168.49.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0412 20:26:11.597353  302775 api_server.go:102] status: https://192.168.49.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0412 20:26:12.092945  302775 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0412 20:26:12.097452  302775 api_server.go:266] https://192.168.49.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0412 20:26:12.097482  302775 api_server.go:102] status: https://192.168.49.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0412 20:26:12.593112  302775 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0412 20:26:12.598178  302775 api_server.go:266] https://192.168.49.2:8444/healthz returned 200:
	ok
	I0412 20:26:12.604429  302775 api_server.go:140] control plane version: v1.23.5
	I0412 20:26:12.604455  302775 api_server.go:130] duration metric: took 4.512735667s to wait for apiserver health ...
	I0412 20:26:12.604466  302775 cni.go:93] Creating CNI manager for ""
	I0412 20:26:12.604475  302775 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
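The healthz probe above moves through a predictable sequence: connection refused while the apiserver socket is still down, 403 while anonymous access to /healthz has not yet been granted, 500 while post-start hooks such as rbac/bootstrap-roles are still failing, and finally 200. A minimal sketch of such a probe; TLS verification is skipped because, like the anonymous check here, it presents no client credentials, and the endpoint is taken from the log:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// pollHealthz re-checks the apiserver /healthz endpoint every
	// 500ms until it returns 200 OK or the retry budget runs out.
	func pollHealthz(url string) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		for i := 0; i < 60; i++ {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				// 403: anonymous access not yet granted; 500: a
				// post-start hook is still reporting failure.
				fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("healthz never became ready")
	}

	func main() {
		if err := pollHealthz("https://192.168.49.2:8444/healthz"); err != nil {
			fmt.Println(err)
		}
	}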
	I0412 20:26:09.405613  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:11.905154  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:11.029929  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:13.030209  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:15.530013  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:12.607164  302775 out.go:176] * Configuring CNI (Container Networking Interface) ...
	I0412 20:26:12.607235  302775 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0412 20:26:12.610895  302775 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.5/kubectl ...
	I0412 20:26:12.610917  302775 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0412 20:26:12.624805  302775 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0412 20:26:13.514228  302775 system_pods.go:43] waiting for kube-system pods to appear ...
	I0412 20:26:13.521326  302775 system_pods.go:59] 9 kube-system pods found
	I0412 20:26:13.521387  302775 system_pods.go:61] "coredns-64897985d-c2gzm" [17d60869-0f98-4975-877a-d2ac69c4c6c2] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0412 20:26:13.521400  302775 system_pods.go:61] "etcd-default-k8s-different-port-20220412201228-42006" [90ac8791-2f40-445e-a751-748814d43a72] Running
	I0412 20:26:13.521415  302775 system_pods.go:61] "kindnet-852v4" [d4596d79-4aba-4c96-9fd5-c2c2b2010810] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0412 20:26:13.521437  302775 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20220412201228-42006" [a3eb3b43-f13c-4205-9caf-0b3914050d7c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0412 20:26:13.521450  302775 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20220412201228-42006" [fca7914c-0a48-40de-af60-44c695d023c7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0412 20:26:13.521456  302775 system_pods.go:61] "kube-proxy-nfsgp" [fb26fa90-e38d-4c50-bbdc-aa46859bef70] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0412 20:26:13.521466  302775 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20220412201228-42006" [9fbd69c6-cf7b-4801-b028-f7729f80bf64] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0412 20:26:13.521475  302775 system_pods.go:61] "metrics-server-b955d9d8-8z9c9" [e954cf67-0a7d-42ed-b754-921b79512531] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0412 20:26:13.521484  302775 system_pods.go:61] "storage-provisioner" [c1d494a3-740b-43f4-bd16-12e781074fdd] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0412 20:26:13.521493  302775 system_pods.go:74] duration metric: took 7.243145ms to wait for pod list to return data ...
	I0412 20:26:13.521504  302775 node_conditions.go:102] verifying NodePressure condition ...
	I0412 20:26:13.524664  302775 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0412 20:26:13.524723  302775 node_conditions.go:123] node cpu capacity is 8
	I0412 20:26:13.524744  302775 node_conditions.go:105] duration metric: took 3.23136ms to run NodePressure ...
	I0412 20:26:13.524771  302775 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0412 20:26:13.661578  302775 kubeadm.go:737] waiting for restarted kubelet to initialise ...
	I0412 20:26:13.665722  302775 kubeadm.go:752] kubelet initialised
	I0412 20:26:13.665746  302775 kubeadm.go:753] duration metric: took 4.136738ms waiting for restarted kubelet to initialise ...
	I0412 20:26:13.665755  302775 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0412 20:26:13.670837  302775 pod_ready.go:78] waiting up to 4m0s for pod "coredns-64897985d-c2gzm" in "kube-system" namespace to be "Ready" ...
	I0412 20:26:15.676828  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
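coredns stays Pending for the rest of this log because the node still carries the node.kubernetes.io/not-ready taint, which coredns does not tolerate; the taint is lifted only once the CNI is up and the kubelet reports Ready. The interleaved node_ready lines from processes 289404 and 293188 are the other test profiles stuck in the same NotReady wait. A hedged client-go sketch for inspecting exactly this state; the kubeconfig path is illustrative:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Print every kube-system pod condition that is not True,
		// which surfaces the Unschedulable/taint message seen above.
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			for _, c := range p.Status.Conditions {
				if c.Status != "True" {
					fmt.Printf("%s: %s=%s (%s)\n", p.Name, c.Type, c.Status, c.Message)
				}
			}
		}
	}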
	I0412 20:26:14.405001  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:16.405140  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:18.405282  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:18.029626  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:20.029796  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:18.177431  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:20.676699  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:20.904768  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:22.905306  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:22.530289  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:25.030441  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:22.676917  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:25.177312  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:25.405505  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:27.405547  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:27.529706  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:29.529954  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:27.677396  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:30.176836  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:29.904767  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:31.905389  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:32.029879  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:34.030539  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:32.177928  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:34.676583  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:34.405637  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:36.904807  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:36.030819  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:38.529411  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:40.529737  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:36.676861  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:38.676927  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:39.404491  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:41.404659  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:43.905243  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:43.029801  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:45.030177  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:41.177333  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:43.177431  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:45.177567  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:46.404939  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:48.405023  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:47.529990  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:50.029848  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:47.676992  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:50.177314  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:50.904925  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:52.905456  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:52.529958  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:54.530211  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:52.677354  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:55.177581  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:55.404968  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:57.904806  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:57.029172  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:59.029355  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:57.177797  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:59.676784  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:59.905303  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:27:02.404803  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:27:01.030119  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:27:03.529481  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:27:02.176739  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:04.677083  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:04.904522  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:27:06.905502  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:27:06.030007  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:27:08.529404  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:27:07.177282  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:09.677448  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:09.405228  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:27:11.905282  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:27:11.029791  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:27:13.030282  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:27:15.529429  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:27:12.176384  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:14.177069  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:14.404646  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:27:16.405558  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:27:18.905261  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:27:17.530006  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:27:20.030016  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:27:16.177280  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:18.677413  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:21.405385  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:27:22.907629  289404 node_ready.go:38] duration metric: took 4m0.012711851s waiting for node "old-k8s-version-20220412200421-42006" to be "Ready" ...
	I0412 20:27:22.910753  289404 out.go:176] 
	W0412 20:27:22.910934  289404 out.go:241] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0412 20:27:22.910950  289404 out.go:241] * 
	W0412 20:27:22.911829  289404 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0412 20:27:22.030056  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:27:24.529656  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:27:21.176971  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:23.676778  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:25.677210  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:27.029850  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:27:27.532457  293188 node_ready.go:38] duration metric: took 4m0.016261704s waiting for node "embed-certs-20220412200510-42006" to be "Ready" ...
	I0412 20:27:27.535074  293188 out.go:176] 
	W0412 20:27:27.535184  293188 out.go:241] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0412 20:27:27.535195  293188 out.go:241] * 
	W0412 20:27:27.535868  293188 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0412 20:27:28.176545  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:30.177022  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:32.677020  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:35.177243  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:37.677194  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:40.176627  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:42.177209  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:44.677318  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:46.677818  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:49.176630  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:51.676722  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:54.176912  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:56.177137  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:58.677009  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:28:01.177266  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:28:03.676844  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:28:06.176674  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:28:08.177076  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:28:10.177207  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:28:12.676641  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:28:15.176557  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:28:17.677002  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:28:19.677697  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:28:22.176483  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:28:24.676630  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:28:26.677667  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:28:29.177357  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:28:31.677367  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:28:34.176852  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:28:36.177402  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:28:38.677164  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:28:41.177066  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:28:43.676983  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:28:46.177366  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:28:48.677127  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:28:50.677295  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:28:53.177230  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:28:55.677228  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:28:58.176672  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:29:00.176822  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:29:02.676739  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:29:04.677056  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:29:06.677123  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:29:09.176984  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:29:11.677277  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:29:14.176562  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:29:16.176807  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:29:18.677182  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:29:21.177384  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:29:23.677402  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:29:26.176749  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:29:28.176804  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:29:30.177721  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:29:32.676621  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:29:34.677246  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:29:36.677802  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:29:39.176692  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:29:41.676441  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:29:43.676503  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:29:45.677234  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:29:48.177008  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:29:50.677510  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:29:53.177088  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:29:55.677043  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:29:58.176812  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:30:00.177215  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:30:02.676366  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:30:04.676503  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:30:06.676719  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:30:08.677078  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:30:11.176385  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:30:13.176787  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:30:13.673973  302775 pod_ready.go:81] duration metric: took 4m0.003097375s waiting for pod "coredns-64897985d-c2gzm" in "kube-system" namespace to be "Ready" ...
	E0412 20:30:13.674004  302775 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "coredns-64897985d-c2gzm" in "kube-system" namespace to be "Ready" (will not retry!)
	I0412 20:30:13.674026  302775 pod_ready.go:38] duration metric: took 4m0.008261536s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0412 20:30:13.674088  302775 kubeadm.go:605] restartCluster took 4m15.671526358s
	W0412 20:30:13.674261  302775 out.go:241] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0412 20:30:13.674296  302775 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0412 20:30:15.434543  302775 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.760223538s)
	I0412 20:30:15.434648  302775 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0412 20:30:15.444487  302775 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0412 20:30:15.452033  302775 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0412 20:30:15.452119  302775 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0412 20:30:15.459066  302775 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0412 20:30:15.459111  302775 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0412 20:30:28.943093  302775 out.go:203]   - Generating certificates and keys ...
	I0412 20:30:28.946723  302775 out.go:203]   - Booting up control plane ...
	I0412 20:30:28.949531  302775 out.go:203]   - Configuring RBAC rules ...
	I0412 20:30:28.951251  302775 cni.go:93] Creating CNI manager for ""
	I0412 20:30:28.951270  302775 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0412 20:30:28.954437  302775 out.go:176] * Configuring CNI (Container Networking Interface) ...
	I0412 20:30:28.954502  302775 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0412 20:30:28.958449  302775 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.5/kubectl ...
	I0412 20:30:28.958473  302775 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0412 20:30:28.972610  302775 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0412 20:30:29.581068  302775 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0412 20:30:29.581147  302775 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl label nodes minikube.k8s.io/version=v1.25.2 minikube.k8s.io/commit=dcd548d63d1c0dcbdc0ffc0bd37d4379117c142f minikube.k8s.io/name=default-k8s-different-port-20220412201228-42006 minikube.k8s.io/updated_at=2022_04_12T20_30_29_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:30:29.581148  302775 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:30:29.588127  302775 ops.go:34] apiserver oom_adj: -16
	I0412 20:30:29.648666  302775 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:30:30.229416  302775 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	[... identical `kubectl get sa default` poll repeated every 500ms, 20:30:30.729 through 20:30:42.228 (24 lines elided) ...]
	I0412 20:30:42.729297  302775 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:30:42.795666  302775 kubeadm.go:1020] duration metric: took 13.214575797s to wait for elevateKubeSystemPrivileges.
	I0412 20:30:42.795702  302775 kubeadm.go:393] StartCluster complete in 4m44.840593181s
	I0412 20:30:42.795726  302775 settings.go:142] acquiring lock: {Name:mkaf0259d09993f7f0249c32b54fea561e21f88c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 20:30:42.795894  302775 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0412 20:30:42.797959  302775 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig: {Name:mk47182dadcc139652898f38b199a7292a3b4031 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 20:30:43.316096  302775 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20220412201228-42006" rescaled to 1
	I0412 20:30:43.316236  302775 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0412 20:30:43.316267  302775 addons.go:415] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I0412 20:30:43.316330  302775 addons.go:65] Setting storage-provisioner=true in profile "default-k8s-different-port-20220412201228-42006"
	I0412 20:30:43.316365  302775 addons.go:65] Setting default-storageclass=true in profile "default-k8s-different-port-20220412201228-42006"
	I0412 20:30:43.316387  302775 addons.go:65] Setting metrics-server=true in profile "default-k8s-different-port-20220412201228-42006"
	I0412 20:30:43.316392  302775 addons.go:153] Setting addon storage-provisioner=true in "default-k8s-different-port-20220412201228-42006"
	I0412 20:30:43.316399  302775 addons.go:153] Setting addon metrics-server=true in "default-k8s-different-port-20220412201228-42006"
	I0412 20:30:43.316231  302775 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.23.5 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0412 20:30:43.318925  302775 out.go:176] * Verifying Kubernetes components...
	I0412 20:30:43.316370  302775 addons.go:65] Setting dashboard=true in profile "default-k8s-different-port-20220412201228-42006"
	I0412 20:30:43.319000  302775 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0412 20:30:43.319019  302775 addons.go:153] Setting addon dashboard=true in "default-k8s-different-port-20220412201228-42006"
	I0412 20:30:43.316478  302775 config.go:178] Loaded profile config "default-k8s-different-port-20220412201228-42006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.5
	I0412 20:30:43.316392  302775 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20220412201228-42006"
	W0412 20:30:43.316403  302775 addons.go:165] addon storage-provisioner should already be in state true
	I0412 20:30:43.319204  302775 host.go:66] Checking if "default-k8s-different-port-20220412201228-42006" exists ...
	W0412 20:30:43.316409  302775 addons.go:165] addon metrics-server should already be in state true
	I0412 20:30:43.319309  302775 host.go:66] Checking if "default-k8s-different-port-20220412201228-42006" exists ...
	W0412 20:30:43.319076  302775 addons.go:165] addon dashboard should already be in state true
	I0412 20:30:43.319411  302775 host.go:66] Checking if "default-k8s-different-port-20220412201228-42006" exists ...
	I0412 20:30:43.319521  302775 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220412201228-42006 --format={{.State.Status}}
	I0412 20:30:43.319712  302775 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220412201228-42006 --format={{.State.Status}}
	I0412 20:30:43.319812  302775 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220412201228-42006 --format={{.State.Status}}
	I0412 20:30:43.319884  302775 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220412201228-42006 --format={{.State.Status}}
	I0412 20:30:43.368004  302775 out.go:176]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0412 20:30:43.369733  302775 out.go:176]   - Using image kubernetesui/dashboard:v2.5.1
	I0412 20:30:43.368143  302775 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0412 20:30:43.369830  302775 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0412 20:30:43.371713  302775 out.go:176]   - Using image k8s.gcr.io/echoserver:1.4
	I0412 20:30:43.369909  302775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220412201228-42006
	I0412 20:30:43.371811  302775 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0412 20:30:43.371829  302775 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0412 20:30:43.371894  302775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220412201228-42006
	I0412 20:30:43.373558  302775 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0412 20:30:43.373752  302775 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0412 20:30:43.373772  302775 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0412 20:30:43.373846  302775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220412201228-42006
	I0412 20:30:43.384370  302775 addons.go:153] Setting addon default-storageclass=true in "default-k8s-different-port-20220412201228-42006"
	W0412 20:30:43.384406  302775 addons.go:165] addon default-storageclass should already be in state true
	I0412 20:30:43.384440  302775 host.go:66] Checking if "default-k8s-different-port-20220412201228-42006" exists ...
	I0412 20:30:43.384946  302775 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220412201228-42006 --format={{.State.Status}}
	I0412 20:30:43.415524  302775 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20220412201228-42006" to be "Ready" ...
	I0412 20:30:43.415635  302775 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0412 20:30:43.419849  302775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220412201228-42006/id_rsa Username:docker}
	I0412 20:30:43.421835  302775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220412201228-42006/id_rsa Username:docker}
	I0412 20:30:43.422931  302775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220412201228-42006/id_rsa Username:docker}
	I0412 20:30:43.441543  302775 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0412 20:30:43.441567  302775 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0412 20:30:43.441611  302775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220412201228-42006
	I0412 20:30:43.477201  302775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220412201228-42006/id_rsa Username:docker}
	I0412 20:30:43.584023  302775 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0412 20:30:43.594296  302775 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0412 20:30:43.594323  302775 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0412 20:30:43.594540  302775 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0412 20:30:43.594567  302775 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0412 20:30:43.597433  302775 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0412 20:30:43.611081  302775 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0412 20:30:43.611109  302775 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0412 20:30:43.612709  302775 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0412 20:30:43.612735  302775 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0412 20:30:43.695590  302775 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0412 20:30:43.695620  302775 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0412 20:30:43.695871  302775 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0412 20:30:43.695896  302775 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0412 20:30:43.713161  302775 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0412 20:30:43.783491  302775 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0412 20:30:43.783522  302775 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0412 20:30:43.786723  302775 start.go:777] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
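
The injection confirmed here is performed by the bash pipeline logged at 20:30:43.415 above: minikube splices a hosts{} stanza into the CoreDNS Corefile so cluster DNS resolves host.minikube.internal to the host gateway. Reduced to plain kubectl (a sketch; the in-node binary and kubeconfig paths from the log are dropped, so this assumes kubectl already points at the cluster):

    kubectl -n kube-system get configmap coredns -o yaml \
      | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' \
      | kubectl replace -f -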
	I0412 20:30:43.804035  302775 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0412 20:30:43.804161  302775 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0412 20:30:43.880364  302775 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0412 20:30:43.880416  302775 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0412 20:30:43.898688  302775 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0412 20:30:43.898715  302775 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0412 20:30:43.979407  302775 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0412 20:30:43.979444  302775 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0412 20:30:44.000255  302775 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0412 20:30:44.000283  302775 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0412 20:30:44.102994  302775 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0412 20:30:44.494063  302775 addons.go:386] Verifying addon metrics-server=true in "default-k8s-different-port-20220412201228-42006"
	I0412 20:30:44.918251  302775 out.go:176] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0412 20:30:44.918280  302775 addons.go:417] enableAddons completed in 1.602020138s
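
All four addons report enabled in about 1.6s. To double-check after the fact that the manifests actually produced workloads, something like the following should do (a hypothetical verification step, not part of the captured run; the namespaces are the stock ones minikube's bundled manifests use):

    minikube addons list -p default-k8s-different-port-20220412201228-42006
    kubectl -n kube-system get deploy metrics-server
    kubectl -n kubernetes-dashboard get pods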
	I0412 20:30:45.423200  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	[... same node_ready check repeated every ~2.5s, 20:30:47 through 20:34:41, status "Ready":"False" every time (102 lines elided) ...]
	I0412 20:34:43.422840  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:34:43.425257  302775 node_ready.go:38] duration metric: took 4m0.009696502s waiting for node "default-k8s-different-port-20220412201228-42006" to be "Ready" ...
	I0412 20:34:43.428510  302775 out.go:176] 
	W0412 20:34:43.428724  302775 out.go:241] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0412 20:34:43.428749  302775 out.go:241] * 
	W0412 20:34:43.429581  302775 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
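
The GUEST_START exit above boils down to a single unmet condition: the node never reported Ready inside the wait window. The standalone equivalent of what minikube was polling for is roughly (a sketch, not taken from the log):

    kubectl wait --for=condition=Ready \
      node/default-k8s-different-port-20220412201228-42006 --timeout=6m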
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	458131e81aa96       6de166512aa22       53 seconds ago      Running             kindnet-cni               4                   87d378e0d4e49
	81eea60d1ff23       6de166512aa22       4 minutes ago       Exited              kindnet-cni               3                   87d378e0d4e49
	85e45673df218       3c53fa8541f95       13 minutes ago      Running             kube-proxy                0                   a78b57801a708
	93ef4fab7f5ad       884d49d6d8c9f       13 minutes ago      Running             kube-scheduler            2                   5bc8e7efde0b6
	a6631d59aa0ff       3fc1d62d65872       13 minutes ago      Running             kube-apiserver            2                   501d4f4e3dfa1
	faccb325c093f       b0c9e5e4dbb14       13 minutes ago      Running             kube-controller-manager   2                   b74d72be2b4d2
	d8ee5605c1944       25f8c7f3da61c       13 minutes ago      Running             etcd                      2                   7f38bf6138d38
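
Note the one unstable row: kindnet-cni is on attempt 4 and last exited four minutes before capture, while every control-plane container has run undisturbed for 13 minutes. The same view can be pulled from the machine's CRI directly (a sketch; crictl is assumed present in the kicbase image, and the profile name is taken from the containerd log below):

    minikube ssh -p embed-certs-20220412200510-42006 -- sudo crictl ps -a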
	
	* 
	* ==> containerd <==
	* -- Logs begin at Tue 2022-04-12 20:18:26 UTC, end at Tue 2022-04-12 20:36:31 UTC. --
	Apr 12 20:28:49 embed-certs-20220412200510-42006 containerd[345]: time="2022-04-12T20:28:49.737499537Z" level=info msg="RemoveContainer for \"35dd0377876c545c5dc4bbb6888b37789a2b801f8fc151e52e479a9af0b95295\" returns successfully"
	Apr 12 20:29:04 embed-certs-20220412200510-42006 containerd[345]: time="2022-04-12T20:29:04.106406304Z" level=info msg="CreateContainer within sandbox \"87d378e0d4e49f2737411e5f59f4d3e7d7b3dd770002c06a77f266aa1546d873\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:2,}"
	Apr 12 20:29:04 embed-certs-20220412200510-42006 containerd[345]: time="2022-04-12T20:29:04.119901345Z" level=info msg="CreateContainer within sandbox \"87d378e0d4e49f2737411e5f59f4d3e7d7b3dd770002c06a77f266aa1546d873\" for &ContainerMetadata{Name:kindnet-cni,Attempt:2,} returns container id \"911d1953ae0e4330b22808b7aa1584f05cdf30d48dd10ade6c6a831cc3036389\""
	Apr 12 20:29:04 embed-certs-20220412200510-42006 containerd[345]: time="2022-04-12T20:29:04.120411319Z" level=info msg="StartContainer for \"911d1953ae0e4330b22808b7aa1584f05cdf30d48dd10ade6c6a831cc3036389\""
	Apr 12 20:29:04 embed-certs-20220412200510-42006 containerd[345]: time="2022-04-12T20:29:04.283808955Z" level=info msg="StartContainer for \"911d1953ae0e4330b22808b7aa1584f05cdf30d48dd10ade6c6a831cc3036389\" returns successfully"
	Apr 12 20:31:44 embed-certs-20220412200510-42006 containerd[345]: time="2022-04-12T20:31:44.522970399Z" level=info msg="shim disconnected" id=911d1953ae0e4330b22808b7aa1584f05cdf30d48dd10ade6c6a831cc3036389
	Apr 12 20:31:44 embed-certs-20220412200510-42006 containerd[345]: time="2022-04-12T20:31:44.523045509Z" level=warning msg="cleaning up after shim disconnected" id=911d1953ae0e4330b22808b7aa1584f05cdf30d48dd10ade6c6a831cc3036389 namespace=k8s.io
	Apr 12 20:31:44 embed-certs-20220412200510-42006 containerd[345]: time="2022-04-12T20:31:44.523063275Z" level=info msg="cleaning up dead shim"
	Apr 12 20:31:44 embed-certs-20220412200510-42006 containerd[345]: time="2022-04-12T20:31:44.534082258Z" level=warning msg="cleanup warnings time=\"2022-04-12T20:31:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4280\n"
	Apr 12 20:31:45 embed-certs-20220412200510-42006 containerd[345]: time="2022-04-12T20:31:45.034504917Z" level=info msg="RemoveContainer for \"b81c34430cb1e01a16c1e8ce15da130c957aa8e09978f3d5d28604fa71d3179a\""
	Apr 12 20:31:45 embed-certs-20220412200510-42006 containerd[345]: time="2022-04-12T20:31:45.040897006Z" level=info msg="RemoveContainer for \"b81c34430cb1e01a16c1e8ce15da130c957aa8e09978f3d5d28604fa71d3179a\" returns successfully"
	Apr 12 20:32:09 embed-certs-20220412200510-42006 containerd[345]: time="2022-04-12T20:32:09.105537907Z" level=info msg="CreateContainer within sandbox \"87d378e0d4e49f2737411e5f59f4d3e7d7b3dd770002c06a77f266aa1546d873\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:3,}"
	Apr 12 20:32:09 embed-certs-20220412200510-42006 containerd[345]: time="2022-04-12T20:32:09.120107240Z" level=info msg="CreateContainer within sandbox \"87d378e0d4e49f2737411e5f59f4d3e7d7b3dd770002c06a77f266aa1546d873\" for &ContainerMetadata{Name:kindnet-cni,Attempt:3,} returns container id \"81eea60d1ff23b15bfffc928b1640e483b9bb11e2b4932bcb4dd3e100ef73619\""
	Apr 12 20:32:09 embed-certs-20220412200510-42006 containerd[345]: time="2022-04-12T20:32:09.120693086Z" level=info msg="StartContainer for \"81eea60d1ff23b15bfffc928b1640e483b9bb11e2b4932bcb4dd3e100ef73619\""
	Apr 12 20:32:09 embed-certs-20220412200510-42006 containerd[345]: time="2022-04-12T20:32:09.284472596Z" level=info msg="StartContainer for \"81eea60d1ff23b15bfffc928b1640e483b9bb11e2b4932bcb4dd3e100ef73619\" returns successfully"
	Apr 12 20:34:49 embed-certs-20220412200510-42006 containerd[345]: time="2022-04-12T20:34:49.538513941Z" level=info msg="shim disconnected" id=81eea60d1ff23b15bfffc928b1640e483b9bb11e2b4932bcb4dd3e100ef73619
	Apr 12 20:34:49 embed-certs-20220412200510-42006 containerd[345]: time="2022-04-12T20:34:49.538583125Z" level=warning msg="cleaning up after shim disconnected" id=81eea60d1ff23b15bfffc928b1640e483b9bb11e2b4932bcb4dd3e100ef73619 namespace=k8s.io
	Apr 12 20:34:49 embed-certs-20220412200510-42006 containerd[345]: time="2022-04-12T20:34:49.538600543Z" level=info msg="cleaning up dead shim"
	Apr 12 20:34:49 embed-certs-20220412200510-42006 containerd[345]: time="2022-04-12T20:34:49.549651196Z" level=warning msg="cleanup warnings time=\"2022-04-12T20:34:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4387\n"
	Apr 12 20:34:50 embed-certs-20220412200510-42006 containerd[345]: time="2022-04-12T20:34:50.343597695Z" level=info msg="RemoveContainer for \"911d1953ae0e4330b22808b7aa1584f05cdf30d48dd10ade6c6a831cc3036389\""
	Apr 12 20:34:50 embed-certs-20220412200510-42006 containerd[345]: time="2022-04-12T20:34:50.348470930Z" level=info msg="RemoveContainer for \"911d1953ae0e4330b22808b7aa1584f05cdf30d48dd10ade6c6a831cc3036389\" returns successfully"
	Apr 12 20:35:38 embed-certs-20220412200510-42006 containerd[345]: time="2022-04-12T20:35:38.105855619Z" level=info msg="CreateContainer within sandbox \"87d378e0d4e49f2737411e5f59f4d3e7d7b3dd770002c06a77f266aa1546d873\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:4,}"
	Apr 12 20:35:38 embed-certs-20220412200510-42006 containerd[345]: time="2022-04-12T20:35:38.121568358Z" level=info msg="CreateContainer within sandbox \"87d378e0d4e49f2737411e5f59f4d3e7d7b3dd770002c06a77f266aa1546d873\" for &ContainerMetadata{Name:kindnet-cni,Attempt:4,} returns container id \"458131e81aa966d3bc386e5f9876e78048946beeb4f7f89ef912eb7073b1a7cc\""
	Apr 12 20:35:38 embed-certs-20220412200510-42006 containerd[345]: time="2022-04-12T20:35:38.122145802Z" level=info msg="StartContainer for \"458131e81aa966d3bc386e5f9876e78048946beeb4f7f89ef912eb7073b1a7cc\""
	Apr 12 20:35:38 embed-certs-20220412200510-42006 containerd[345]: time="2022-04-12T20:35:38.285524167Z" level=info msg="StartContainer for \"458131e81aa966d3bc386e5f9876e78048946beeb4f7f89ef912eb7073b1a7cc\" returns successfully"
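
The containerd excerpt makes the crash loop explicit: kindnet-cni attempt 2 starts at 20:29:04, attempt 3 at 20:32:09, attempt 4 at 20:35:38, and each shim disconnects roughly 2m40s after its StartContainer. To pull the same journal slice by hand (a sketch):

    minikube ssh -p embed-certs-20220412200510-42006 -- \
      sudo journalctl -u containerd --no-pager --since "2022-04-12 20:28:00" | grep kindnet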
	
	* 
	* ==> describe nodes <==
	* Name:               embed-certs-20220412200510-42006
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-20220412200510-42006
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=dcd548d63d1c0dcbdc0ffc0bd37d4379117c142f
	                    minikube.k8s.io/name=embed-certs-20220412200510-42006
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_04_12T20_23_13_0700
	                    minikube.k8s.io/version=v1.25.2
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 12 Apr 2022 20:23:10 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-20220412200510-42006
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 12 Apr 2022 20:36:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 12 Apr 2022 20:33:41 +0000   Tue, 12 Apr 2022 20:23:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 12 Apr 2022 20:33:41 +0000   Tue, 12 Apr 2022 20:23:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 12 Apr 2022 20:33:41 +0000   Tue, 12 Apr 2022 20:23:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Tue, 12 Apr 2022 20:33:41 +0000   Tue, 12 Apr 2022 20:23:07 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    embed-certs-20220412200510-42006
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873828Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873828Ki
	  pods:               110
	System Info:
	  Machine ID:                 140a143b31184b58be947b52a01fff83
	  System UUID:                ce1f241f-9ecd-4653-8279-4a97e0fb4c59
	  Boot ID:                    16b2caa1-c1b9-4ccc-85b8-d4dc3f51a5e1
	  Kernel Version:             5.13.0-1023-gcp
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.5.10
	  Kubelet Version:            v1.23.5
	  Kube-Proxy Version:         v1.23.5
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                        ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-embed-certs-20220412200510-42006                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         13m
	  kube-system                 kindnet-n99zz                                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-embed-certs-20220412200510-42006             250m (3%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-embed-certs-20220412200510-42006    200m (2%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-zbssv                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-embed-certs-20220412200510-42006             100m (1%)     0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From        Message
	  ----    ------                   ----  ----        -------
	  Normal  Starting                 13m   kube-proxy  
	  Normal  Starting                 13m   kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m   kubelet     Node embed-certs-20220412200510-42006 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m   kubelet     Node embed-certs-20220412200510-42006 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m   kubelet     Node embed-certs-20220412200510-42006 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m   kubelet     Updated Node Allocatable limit across pods
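
Everything in the node description is healthy except the condition that matters: Ready is False with "container runtime network not ready ... cni plugin not initialized", which also explains the not-ready taint and the kindnet restarts above. A natural first check is whether any CNI config was ever written (the path shown is the kubelet default and is an assumption; clusters can override the conf dir):

    minikube ssh -p embed-certs-20220412200510-42006 -- ls -l /etc/cni/net.d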
	
	* 
	* ==> dmesg <==
	* [  +0.000009] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +0.125166] IPv4: martian source 10.244.0.7 from 10.244.0.7, on dev vethe3e22a2f
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff d2 83 e6 b4 2e c9 08 06
	[  +0.519855] IPv4: martian source 10.244.0.8 from 10.244.0.8, on dev vethde433a44
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fe f7 53 8a eb 26 08 06
	[  +0.208112] IPv4: martian source 10.244.0.9 from 10.244.0.9, on dev veth05fda112
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 06 c9 f0 64 c1 d9 08 06
	[Apr12 20:12] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +1.026706] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +1.023926] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +2.947865] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +1.023840] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +1.019933] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +2.959880] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +1.007861] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +1.023916] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
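
The "martian source" spam is the kernel's reverse-path logging: packets arriving on eth0 with pod-network (10.244.x.x) source addresses it does not expect on that interface. It is noise for this failure, but whether it gets logged at all is sysctl-controlled (shown as a sketch):

    sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.all.log_martians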
	
	* 
	* ==> etcd [d8ee5605c19440f40fed34fa4f74ca552e24853fb32511064fb115ff3859b1e3] <==
	* {"level":"info","ts":"2022-04-12T20:23:07.414Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-04-12T20:23:07.414Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2022-04-12T20:23:07.414Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2022-04-12T20:23:07.415Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-04-12T20:23:07.415Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-04-12T20:23:08.001Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2022-04-12T20:23:08.001Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2022-04-12T20:23:08.001Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2022-04-12T20:23:08.001Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2022-04-12T20:23:08.001Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2022-04-12T20:23:08.001Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2022-04-12T20:23:08.001Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2022-04-12T20:23:08.001Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:embed-certs-20220412200510-42006 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2022-04-12T20:23:08.001Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-04-12T20:23:08.001Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-04-12T20:23:08.001Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-04-12T20:23:08.002Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-04-12T20:23:08.002Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-04-12T20:23:08.002Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2022-04-12T20:23:08.002Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-04-12T20:23:08.002Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-04-12T20:23:08.003Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-04-12T20:23:08.004Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2022-04-12T20:33:08.247Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":644}
	{"level":"info","ts":"2022-04-12T20:33:08.248Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":644,"took":"592.885µs"}
	
	* 
	* ==> kernel <==
	*  20:36:31 up  3:19,  0 users,  load average: 0.51, 0.50, 0.83
	Linux embed-certs-20220412200510-42006 5.13.0-1023-gcp #28~20.04.1-Ubuntu SMP Wed Mar 30 03:51:07 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [a6631d59aa0ffe547791d11102163b9ea508acf27460aef1bb5f74efb2bc37f7] <==
	* I0412 20:26:29.513791       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0412 20:28:11.234617       1 handler_proxy.go:104] no RequestInfo found in the context
	E0412 20:28:11.234695       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0412 20:28:11.234707       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0412 20:29:11.235750       1 handler_proxy.go:104] no RequestInfo found in the context
	E0412 20:29:11.235840       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0412 20:29:11.235850       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0412 20:31:11.236610       1 handler_proxy.go:104] no RequestInfo found in the context
	E0412 20:31:11.236698       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0412 20:31:11.236709       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0412 20:33:11.241140       1 handler_proxy.go:104] no RequestInfo found in the context
	E0412 20:33:11.241216       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0412 20:33:11.241237       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0412 20:34:11.241612       1 handler_proxy.go:104] no RequestInfo found in the context
	E0412 20:34:11.241692       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0412 20:34:11.241703       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0412 20:36:11.242801       1 handler_proxy.go:104] no RequestInfo found in the context
	E0412 20:36:11.242893       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0412 20:36:11.242901       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
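
The apiserver's only recurring complaint is the aggregated v1beta1.metrics.k8s.io API answering 503 on every OpenAPI refresh: the APIService is registered but has no healthy backend, consistent with metrics-server never getting a ready node to run on. The registration status spells this out directly (a hypothetical check, not in the log):

    kubectl get apiservice v1beta1.metrics.k8s.io
    kubectl -n kube-system get endpoints metrics-server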
	
	* 
	* ==> kube-controller-manager [faccb325c093f09235bfc3b79a01d41253d94e3f9500d21aca905d6adf7de317] <==
	* W0412 20:30:26.814261       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0412 20:30:56.414556       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0412 20:30:56.829171       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0412 20:31:26.426305       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0412 20:31:26.844428       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0412 20:31:56.437894       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0412 20:31:56.857953       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0412 20:32:26.452834       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0412 20:32:26.872612       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0412 20:32:56.463889       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0412 20:32:56.887332       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0412 20:33:26.473914       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0412 20:33:26.901882       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0412 20:33:56.485108       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0412 20:33:56.917417       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0412 20:34:26.499917       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0412 20:34:26.933798       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0412 20:34:56.509713       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0412 20:34:56.949893       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0412 20:35:26.520046       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0412 20:35:26.966332       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0412 20:35:56.540198       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0412 20:35:56.982390       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0412 20:36:26.553792       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0412 20:36:26.999615       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [85e45673df2189633dbfdab223666611b928677dbfa8af98b4a47fddf703bf69] <==
	* I0412 20:23:27.293347       1 node.go:163] Successfully retrieved node IP: 192.168.58.2
	I0412 20:23:27.293427       1 server_others.go:138] "Detected node IP" address="192.168.58.2"
	I0412 20:23:27.293491       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0412 20:23:27.316695       1 server_others.go:206] "Using iptables Proxier"
	I0412 20:23:27.316725       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0412 20:23:27.316732       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0412 20:23:27.316753       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0412 20:23:27.317207       1 server.go:656] "Version info" version="v1.23.5"
	I0412 20:23:27.317856       1 config.go:317] "Starting service config controller"
	I0412 20:23:27.317897       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0412 20:23:27.317904       1 config.go:226] "Starting endpoint slice config controller"
	I0412 20:23:27.317932       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0412 20:23:27.418588       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0412 20:23:27.418634       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [93ef4fab7f5ad9f2bafb4768753b511c286cd3b76fc9289ff8377907b9dc61e6] <==
	* W0412 20:23:10.295539       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0412 20:23:10.295566       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0412 20:23:10.295864       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0412 20:23:10.295876       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0412 20:23:10.295891       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0412 20:23:10.295895       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0412 20:23:11.134738       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0412 20:23:11.134794       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0412 20:23:11.227746       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0412 20:23:11.227796       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0412 20:23:11.271118       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0412 20:23:11.271159       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0412 20:23:11.291662       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0412 20:23:11.291701       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0412 20:23:11.325053       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0412 20:23:11.325097       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0412 20:23:11.356371       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0412 20:23:11.356478       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0412 20:23:11.404988       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0412 20:23:11.405037       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0412 20:23:11.444356       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0412 20:23:11.444387       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0412 20:23:11.456618       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0412 20:23:11.456653       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0412 20:23:11.689891       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2022-04-12 20:18:26 UTC, end at Tue 2022-04-12 20:36:31 UTC. --
	Apr 12 20:35:02 embed-certs-20220412200510-42006 kubelet[2910]: I0412 20:35:02.104179    2910 scope.go:110] "RemoveContainer" containerID="81eea60d1ff23b15bfffc928b1640e483b9bb11e2b4932bcb4dd3e100ef73619"
	Apr 12 20:35:02 embed-certs-20220412200510-42006 kubelet[2910]: E0412 20:35:02.104529    2910 pod_workers.go:949] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-n99zz_kube-system(a4a4a20b-4580-4435-bb88-e5f800055b3c)\"" pod="kube-system/kindnet-n99zz" podUID=a4a4a20b-4580-4435-bb88-e5f800055b3c
	Apr 12 20:35:03 embed-certs-20220412200510-42006 kubelet[2910]: E0412 20:35:03.456961    2910 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:35:08 embed-certs-20220412200510-42006 kubelet[2910]: E0412 20:35:08.457856    2910 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:35:13 embed-certs-20220412200510-42006 kubelet[2910]: E0412 20:35:13.458903    2910 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:35:15 embed-certs-20220412200510-42006 kubelet[2910]: I0412 20:35:15.104030    2910 scope.go:110] "RemoveContainer" containerID="81eea60d1ff23b15bfffc928b1640e483b9bb11e2b4932bcb4dd3e100ef73619"
	Apr 12 20:35:15 embed-certs-20220412200510-42006 kubelet[2910]: E0412 20:35:15.104341    2910 pod_workers.go:949] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-n99zz_kube-system(a4a4a20b-4580-4435-bb88-e5f800055b3c)\"" pod="kube-system/kindnet-n99zz" podUID=a4a4a20b-4580-4435-bb88-e5f800055b3c
	Apr 12 20:35:18 embed-certs-20220412200510-42006 kubelet[2910]: E0412 20:35:18.460474    2910 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:35:23 embed-certs-20220412200510-42006 kubelet[2910]: E0412 20:35:23.461799    2910 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:35:26 embed-certs-20220412200510-42006 kubelet[2910]: I0412 20:35:26.103841    2910 scope.go:110] "RemoveContainer" containerID="81eea60d1ff23b15bfffc928b1640e483b9bb11e2b4932bcb4dd3e100ef73619"
	Apr 12 20:35:26 embed-certs-20220412200510-42006 kubelet[2910]: E0412 20:35:26.104282    2910 pod_workers.go:949] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-n99zz_kube-system(a4a4a20b-4580-4435-bb88-e5f800055b3c)\"" pod="kube-system/kindnet-n99zz" podUID=a4a4a20b-4580-4435-bb88-e5f800055b3c
	Apr 12 20:35:28 embed-certs-20220412200510-42006 kubelet[2910]: E0412 20:35:28.462813    2910 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:35:33 embed-certs-20220412200510-42006 kubelet[2910]: E0412 20:35:33.464513    2910 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:35:38 embed-certs-20220412200510-42006 kubelet[2910]: I0412 20:35:38.103251    2910 scope.go:110] "RemoveContainer" containerID="81eea60d1ff23b15bfffc928b1640e483b9bb11e2b4932bcb4dd3e100ef73619"
	Apr 12 20:35:38 embed-certs-20220412200510-42006 kubelet[2910]: E0412 20:35:38.465466    2910 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:35:43 embed-certs-20220412200510-42006 kubelet[2910]: E0412 20:35:43.466370    2910 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:35:48 embed-certs-20220412200510-42006 kubelet[2910]: E0412 20:35:48.467807    2910 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:35:53 embed-certs-20220412200510-42006 kubelet[2910]: E0412 20:35:53.469196    2910 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:35:58 embed-certs-20220412200510-42006 kubelet[2910]: E0412 20:35:58.470222    2910 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:36:03 embed-certs-20220412200510-42006 kubelet[2910]: E0412 20:36:03.471404    2910 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:36:08 embed-certs-20220412200510-42006 kubelet[2910]: E0412 20:36:08.472499    2910 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:36:13 embed-certs-20220412200510-42006 kubelet[2910]: E0412 20:36:13.473509    2910 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:36:18 embed-certs-20220412200510-42006 kubelet[2910]: E0412 20:36:18.474661    2910 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:36:23 embed-certs-20220412200510-42006 kubelet[2910]: E0412 20:36:23.475905    2910 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:36:28 embed-certs-20220412200510-42006 kubelet[2910]: E0412 20:36:28.477137    2910 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	

-- /stdout --
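Two linked symptoms stand out in the logs above: the controller-manager's discovery pass fails on metrics.k8s.io/v1beta1 every 30 seconds (metrics-server never became reachable), and the kubelet reports that the CNI plugin never initialized while the kindnet-cni container sits in CrashLoopBackOff. The second plausibly explains the first, since metrics-server cannot serve its APIService without a working pod network; the scheduler's earlier "forbidden" list errors, by contrast, are the usual startup race before RBAC bootstrap completes, and they stop once its caches sync. A minimal triage sketch, assuming minikube deploys kindnet as a kube-system DaemonSet named kindnet with label app=kindnet (those names are not shown in this log):

	kubectl --context embed-certs-20220412200510-42006 get apiservice v1beta1.metrics.k8s.io
	kubectl --context embed-certs-20220412200510-42006 -n kube-system get pods -l app=kindnet -o wide
	kubectl --context embed-certs-20220412200510-42006 -n kube-system logs daemonset/kindnet --previous

While the backing service is unreachable, the first command should report Available=False (typically with a FailedDiscoveryCheck reason), and the last should surface why kindnet keeps crashing.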
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20220412200510-42006 -n embed-certs-20220412200510-42006
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-20220412200510-42006 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: coredns-64897985d-74r7x metrics-server-b955d9d8-vmhkr storage-provisioner dashboard-metrics-scraper-56974995fc-dhvbk kubernetes-dashboard-8469778f77-4f5z8
helpers_test.go:272: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context embed-certs-20220412200510-42006 describe pod coredns-64897985d-74r7x metrics-server-b955d9d8-vmhkr storage-provisioner dashboard-metrics-scraper-56974995fc-dhvbk kubernetes-dashboard-8469778f77-4f5z8
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context embed-certs-20220412200510-42006 describe pod coredns-64897985d-74r7x metrics-server-b955d9d8-vmhkr storage-provisioner dashboard-metrics-scraper-56974995fc-dhvbk kubernetes-dashboard-8469778f77-4f5z8: exit status 1 (71.635212ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-64897985d-74r7x" not found
	Error from server (NotFound): pods "metrics-server-b955d9d8-vmhkr" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-56974995fc-dhvbk" not found
	Error from server (NotFound): pods "kubernetes-dashboard-8469778f77-4f5z8" not found

** /stderr **
helpers_test.go:277: kubectl --context embed-certs-20220412200510-42006 describe pod coredns-64897985d-74r7x metrics-server-b955d9d8-vmhkr storage-provisioner dashboard-metrics-scraper-56974995fc-dhvbk kubernetes-dashboard-8469778f77-4f5z8: exit status 1
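All five describe targets came back NotFound even though the same names were listed as non-running moments earlier, which suggests the pods were deleted, and for the ReplicaSet-managed ones recreated under new hashed names, between the two kubectl calls. A sketch that sidesteps that race by capturing the full objects in the same call that discovers them (same field selector as above, only the output format changes):

	kubectl --context embed-certs-20220412200510-42006 get pods -A --field-selector=status.phase!=Running -o yaml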
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (542.57s)

TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (542.51s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:258: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-8469778f77-xs557" [f14b74ca-56c9-4b73-96c4-4e5a79bb9c53] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
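The dashboard pod is unschedulable because the cluster's only node still carries the node.kubernetes.io/not-ready taint; the node lifecycle controller removes that taint only once the node's Ready condition turns true, which in turn requires an initialized CNI. A quick way to confirm the taint (a sketch; the jsonpath assumes a single-node cluster):

	kubectl --context default-k8s-different-port-20220412201228-42006 get nodes -o jsonpath='{.items[0].spec.taints}'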

=== CONT  TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
(the identical WARNING line above repeated 30 times in a row)
E0412 20:42:58.177954   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/bridge-20220412195202-42006/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
(the identical WARNING line above repeated 4 times in a row)
E0412 20:43:02.670042   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/addons-20220412192056-42006/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
(the identical WARNING line above repeated 12 times in a row)
E0412 20:43:14.515094   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/functional-20220412192609-42006/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
(the identical WARNING line above repeated 17 times in a row)
E0412 20:43:31.519509   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/enable-default-cni-20220412195202-42006/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: context deadline exceeded
(the identical WARNING line above repeated 16 times in a row)
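The interleaved cert_rotation errors are a side channel rather than part of this failure: the long-running test binary's client-go is still trying to reload client certificates for profiles (bridge, addons, functional, enable-default-cni) whose directories were removed when those earlier tests cleaned up, hence "no such file or directory" on every refresh. Listing the surviving profile directories on the CI host would confirm it (path taken verbatim from the messages above):

	ls /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/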
start_stop_delete_test.go:258: ***** TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: timed out waiting for the condition ****
start_stop_delete_test.go:258: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220412201228-42006 -n default-k8s-different-port-20220412201228-42006
start_stop_delete_test.go:258: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2022-04-12 20:43:46.071619473 +0000 UTC m=+5005.047917881
start_stop_delete_test.go:258: (dbg) Run:  kubectl --context default-k8s-different-port-20220412201228-42006 describe po kubernetes-dashboard-8469778f77-xs557 -n kubernetes-dashboard
start_stop_delete_test.go:258: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20220412201228-42006 describe po kubernetes-dashboard-8469778f77-xs557 -n kubernetes-dashboard: context deadline exceeded (1.914µs)
start_stop_delete_test.go:258: kubectl --context default-k8s-different-port-20220412201228-42006 describe po kubernetes-dashboard-8469778f77-xs557 -n kubernetes-dashboard: context deadline exceeded
start_stop_delete_test.go:258: (dbg) Run:  kubectl --context default-k8s-different-port-20220412201228-42006 logs kubernetes-dashboard-8469778f77-xs557 -n kubernetes-dashboard
start_stop_delete_test.go:258: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20220412201228-42006 logs kubernetes-dashboard-8469778f77-xs557 -n kubernetes-dashboard: context deadline exceeded (198ns)
start_stop_delete_test.go:258: kubectl --context default-k8s-different-port-20220412201228-42006 logs kubernetes-dashboard-8469778f77-xs557 -n kubernetes-dashboard: context deadline exceeded
start_stop_delete_test.go:259: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: timed out waiting for the condition
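Note the durations on the two Non-zero exits above: 1.914µs and 198ns. No command can fail that quickly by actually running; the test's shared context had already passed its 9m0s deadline, so each follow-up kubectl was cancelled before the child process could do any work. A comparable instant client-side failure can be provoked with an absurdly small request timeout (a sketch; the exact error text may differ by kubectl version):

	kubectl --context default-k8s-different-port-20220412201228-42006 --request-timeout=1ns get pods -n kubernetes-dashboard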
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-different-port-20220412201228-42006
helpers_test.go:235: (dbg) docker inspect default-k8s-different-port-20220412201228-42006:

-- stdout --
	[
	    {
	        "Id": "6642b489f96391820ba70b96c7534c3a76d670c12f14b131c414488b6433932f",
	        "Created": "2022-04-12T20:12:37.404174744Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 303040,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-04-12T20:25:41.726729323Z",
	            "FinishedAt": "2022-04-12T20:25:40.439971944Z"
	        },
	        "Image": "sha256:44d43b69f3d5ba7f801dca891b535f23f9839671e82277938ec7dc42a22c50d6",
	        "ResolvConfPath": "/var/lib/docker/containers/6642b489f96391820ba70b96c7534c3a76d670c12f14b131c414488b6433932f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6642b489f96391820ba70b96c7534c3a76d670c12f14b131c414488b6433932f/hostname",
	        "HostsPath": "/var/lib/docker/containers/6642b489f96391820ba70b96c7534c3a76d670c12f14b131c414488b6433932f/hosts",
	        "LogPath": "/var/lib/docker/containers/6642b489f96391820ba70b96c7534c3a76d670c12f14b131c414488b6433932f/6642b489f96391820ba70b96c7534c3a76d670c12f14b131c414488b6433932f-json.log",
	        "Name": "/default-k8s-different-port-20220412201228-42006",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-different-port-20220412201228-42006:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-different-port-20220412201228-42006",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/6c20441da854c76109edadd5c14467eeab1a532a78b987301c8ccc63f013fdb5-init/diff:/var/lib/docker/overlay2/a46d95d024de4bf9705eb193a92586bdab1878cd991975232b71b00099a9dcbd/diff:/var/lib/docker/overlay2/ea82ee4a684697cc3575193cd81b57372b927c9bf8e744fce634f9abd0ce56f9/diff:/var/lib/docker/overlay2/78746ad8dd0d6497f442bd186c99cfd280a7ed0ff07c9d33d217c0f00c8c4565/diff:/var/lib/docker/overlay2/a402f380eceb56655ea5f1e6ca4a61a01ae014a5df04f1a7d02d8f57ff3e6c84/diff:/var/lib/docker/overlay2/b27a231791a4d14a662f9e6e34fdd213411e56cc17149199657aa480018b3c72/diff:/var/lib/docker/overlay2/0a44e7fc2c8d5589d496b9d0585d39e8e142f48342ff9669a35c370bd0298e42/diff:/var/lib/docker/overlay2/6ca98e52ca7d4cc60d14bd2db9969dd3356e0e0ce3acd5bfb5734e6e59f52c7e/diff:/var/lib/docker/overlay2/9957a7c00c30c9d801326093ddf20994a7ee1daaa54bc4dac5c2dd6d8711bd7e/diff:/var/lib/docker/overlay2/f7a1aafecf6ee716c484b5eecbbf236a53607c253fe283c289707fad85495a88/diff:/var/lib/docker/overlay2/fe8cd1
26522650fedfc827751e0b74da9a882ff48de51bc9dee6428ee3bc1122/diff:/var/lib/docker/overlay2/5b4cc7e4a78288063ad39231ca158608aa28e9dec6015d4e186e4c4d6888017f/diff:/var/lib/docker/overlay2/2a754ceb6abee0f92c99667fae50c7899233e94595630e9caffbf73cda1ff741/diff:/var/lib/docker/overlay2/9e69139d9b2bc63ab678378e004018ece394ec37e8289ba5eb30901dda160da5/diff:/var/lib/docker/overlay2/3db8e6413b3a1f309b81d2e1a79c3d239c4e4568b31a6f4bf92511f477f3a61d/diff:/var/lib/docker/overlay2/5ab54e45d09e2d6da4f4228ebae3075b5974e1d847526c1011fc7368392ef0d2/diff:/var/lib/docker/overlay2/6daf6a3cf916347bbbb70ace4aab29dd0f272dc9e39d6b0bf14940470857f1d5/diff:/var/lib/docker/overlay2/b85d29df9ed74e769c82a956eb46ca4eaf51018e94270fee2f58a6f2d82c354c/diff:/var/lib/docker/overlay2/0804b9c30e0dcc68e15139106e47bca1969b010d520652c87ff1476f5da9b799/diff:/var/lib/docker/overlay2/2ef50ba91c77826aae2efca8daf7194c2d56fd8e745476a35413585cdab580a6/diff:/var/lib/docker/overlay2/6f5a272367c30d47254dedc8a42e6b2791c406c3b74fd6a8242d568e4ec362e3/diff:/var/lib/d
ocker/overlay2/e978bd5ca7463862ca1b51d0bf19f95d916464dc866f09f1ab4a5ae4c082c3a9/diff:/var/lib/docker/overlay2/0d60a5805e276ca3bff4824250eab1d2960e9d10d28282e07652204c07dc107f/diff:/var/lib/docker/overlay2/d00efa0bc999057fcf3efdeed81022cc8b9b9871919f11d7d9199a3d22fda41b/diff:/var/lib/docker/overlay2/44d3db5bf7925c4cc8ee60008ff23d799e12ea6586850d797b930fa796788861/diff:/var/lib/docker/overlay2/4af15c525b7ce96b7fd4117c156f53cf9099702641c2907909c12b7019563d44/diff:/var/lib/docker/overlay2/ae9ca4b8da4afb1303158a42ec2ac83dc057c0eaefcd69b7eeaa094ae24a39e7/diff:/var/lib/docker/overlay2/afb8ebd776ddcba17d1056f2350cd0b303c6664964644896a92e9c07252b5d95/diff:/var/lib/docker/overlay2/41b6235378ad54ccaec907f16811e7cd66bd777db63151293f4d8247a33af8f1/diff:/var/lib/docker/overlay2/e079465076581cb577a9d5c7d676cecb6495ddd73d9fc330e734203dd7e48607/diff:/var/lib/docker/overlay2/2d3a7c3e62a99d54d94c2562e13b904453442bda8208afe73cdbe1afdbdd0684/diff:/var/lib/docker/overlay2/b9e03b9cbc1c5a9bbdbb0c99ca5d7539c2fa81a37872c40e07377b52f19
50f4b/diff:/var/lib/docker/overlay2/fd0b72378869edec809e7ead1e4448ae67c73245e0e98d751c51253c80f12d56/diff:/var/lib/docker/overlay2/a34f5625ad35eb2eb1058204a5c23590d70d9aae62a3a0cf05f87501c388ccde/diff:/var/lib/docker/overlay2/6221ad5f4d7b133c35d96ab112cf2eb437196475a72ea0ec8952c058c6644381/diff:/var/lib/docker/overlay2/b33a322162ab62a47e5e731b35da4a989d8a79fcb67e1925b109eace6772370c/diff:/var/lib/docker/overlay2/b52fc81aca49f276f1c709fa139521063628f4042b9da5969a3487a57ee3226b/diff:/var/lib/docker/overlay2/5b4d11a181cad1ea657c7ea99d422b51c942ece21b8d24442b4e8806644e0e1c/diff:/var/lib/docker/overlay2/1620ce1d42f02f38d07f3ff0970e3df6940a3be20f3c7cd835f4f40f5cc2d010/diff:/var/lib/docker/overlay2/43f18c528700dc241024bb24f43a0d5192ecc9575f4b053582410f6265326434/diff:/var/lib/docker/overlay2/e59874999e485483e50da428a499e40c91890c33515857454d7a64bc04ca0c43/diff:/var/lib/docker/overlay2/a120ff1bbaa325cd87d2682d6751d3bf287b66d4bbe31bd1f9f6283d724491ac/diff:/var/lib/docker/overlay2/a6a6f3646fabc023283ff6349b9627be8332c4
bb740688f8fda12c98bd76b725/diff:/var/lib/docker/overlay2/3c2b110c4b3a8689b2792b2b73f99f06bd9858b494c2164e812208579b0223f2/diff:/var/lib/docker/overlay2/98e3881e2e4128283f8d66fafc082bc795e22eab77f135635d3249367b92ba5c/diff:/var/lib/docker/overlay2/ce937670cf64eff618c699bfd15e46c6d70c0184fef594182e5ec6df83b265bc/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6c20441da854c76109edadd5c14467eeab1a532a78b987301c8ccc63f013fdb5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6c20441da854c76109edadd5c14467eeab1a532a78b987301c8ccc63f013fdb5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6c20441da854c76109edadd5c14467eeab1a532a78b987301c8ccc63f013fdb5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-different-port-20220412201228-42006",
	                "Source": "/var/lib/docker/volumes/default-k8s-different-port-20220412201228-42006/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-different-port-20220412201228-42006",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-different-port-20220412201228-42006",
	                "name.minikube.sigs.k8s.io": "default-k8s-different-port-20220412201228-42006",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2f31167bd0056875e8d61db40d68ea99f4fbde39279c09c9f9b944b997d42ff3",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49437"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49436"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49433"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49435"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49434"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/2f31167bd005",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-different-port-20220412201228-42006": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "6642b489f963",
	                        "default-k8s-different-port-20220412201228-42006"
	                    ],
	                    "NetworkID": "e1e5eb80641804e0cf03f9ee1959284f2ec05fd6c94f6b6eb19931fc6032414c",
	                    "EndpointID": "262480d183484a7442b9cbdbeef064e40a773ac2bbccc3622cac03a2bef59cce",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
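The inspect output confirms the profile's distinguishing setup: the API server is exposed on container port 8444 rather than the usual 8443 and published at 127.0.0.1:49434, the container is capped at two CPUs (NanoCpus=2000000000), and a StartedAt (20:25:41) later than FinishedAt (20:25:40) shows it was started again after the stop phase of the test. To extract just the port map instead of the full JSON, docker's Go-template output works (a sketch):

	docker inspect default-k8s-different-port-20220412201228-42006 --format '{{json .NetworkSettings.Ports}}'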
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220412201228-42006 -n default-k8s-different-port-20220412201228-42006
helpers_test.go:244: <<< TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-different-port-20220412201228-42006 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-different-port-20220412201228-42006 logs -n 25: (1.053144704s)
helpers_test.go:252: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|-------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                       Args                        |                     Profile                     |  User   | Version |          Start Time           |           End Time            |
	|---------|---------------------------------------------------|-------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| delete  | -p                                                | newest-cni-20220412201253-42006                 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:14:46 UTC | Tue, 12 Apr 2022 20:14:49 UTC |
	|         | newest-cni-20220412201253-42006                   |                                                 |         |         |                               |                               |
	| delete  | -p                                                | newest-cni-20220412201253-42006                 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:14:49 UTC | Tue, 12 Apr 2022 20:14:49 UTC |
	|         | newest-cni-20220412201253-42006                   |                                                 |         |         |                               |                               |
	| -p      | old-k8s-version-20220412200421-42006              | old-k8s-version-20220412200421-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:17:18 UTC | Tue, 12 Apr 2022 20:17:19 UTC |
	|         | logs -n 25                                        |                                                 |         |         |                               |                               |
	| -p      | old-k8s-version-20220412200421-42006              | old-k8s-version-20220412200421-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:17:20 UTC | Tue, 12 Apr 2022 20:17:21 UTC |
	|         | logs -n 25                                        |                                                 |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | old-k8s-version-20220412200421-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:17:22 UTC | Tue, 12 Apr 2022 20:17:22 UTC |
	|         | old-k8s-version-20220412200421-42006              |                                                 |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                 |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                 |         |         |                               |                               |
	| -p      | default-k8s-different-port-20220412201228-42006   | default-k8s-different-port-20220412201228-42006 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:17:23 UTC | Tue, 12 Apr 2022 20:17:24 UTC |
	|         | logs -n 25                                        |                                                 |         |         |                               |                               |
	| stop    | -p                                                | old-k8s-version-20220412200421-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:17:23 UTC | Tue, 12 Apr 2022 20:17:28 UTC |
	|         | old-k8s-version-20220412200421-42006              |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                 |         |         |                               |                               |
	| addons  | enable dashboard -p                               | old-k8s-version-20220412200421-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:17:29 UTC | Tue, 12 Apr 2022 20:17:29 UTC |
	|         | old-k8s-version-20220412200421-42006              |                                                 |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                 |         |         |                               |                               |
	| -p      | embed-certs-20220412200510-42006                  | embed-certs-20220412200510-42006                | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:18:10 UTC | Tue, 12 Apr 2022 20:18:11 UTC |
	|         | logs -n 25                                        |                                                 |         |         |                               |                               |
	| -p      | embed-certs-20220412200510-42006                  | embed-certs-20220412200510-42006                | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:18:13 UTC | Tue, 12 Apr 2022 20:18:13 UTC |
	|         | logs -n 25                                        |                                                 |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | embed-certs-20220412200510-42006                | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:18:14 UTC | Tue, 12 Apr 2022 20:18:14 UTC |
	|         | embed-certs-20220412200510-42006                  |                                                 |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                 |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                 |         |         |                               |                               |
	| stop    | -p                                                | embed-certs-20220412200510-42006                | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:18:15 UTC | Tue, 12 Apr 2022 20:18:25 UTC |
	|         | embed-certs-20220412200510-42006                  |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                 |         |         |                               |                               |
	| addons  | enable dashboard -p                               | embed-certs-20220412200510-42006                | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:18:25 UTC | Tue, 12 Apr 2022 20:18:25 UTC |
	|         | embed-certs-20220412200510-42006                  |                                                 |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                 |         |         |                               |                               |
	| -p      | default-k8s-different-port-20220412201228-42006   | default-k8s-different-port-20220412201228-42006 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:25:26 UTC | Tue, 12 Apr 2022 20:25:27 UTC |
	|         | logs -n 25                                        |                                                 |         |         |                               |                               |
	| -p      | default-k8s-different-port-20220412201228-42006   | default-k8s-different-port-20220412201228-42006 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:25:28 UTC | Tue, 12 Apr 2022 20:25:29 UTC |
	|         | logs -n 25                                        |                                                 |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | default-k8s-different-port-20220412201228-42006 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:25:29 UTC | Tue, 12 Apr 2022 20:25:30 UTC |
	|         | default-k8s-different-port-20220412201228-42006   |                                                 |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                 |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                 |         |         |                               |                               |
	| stop    | -p                                                | default-k8s-different-port-20220412201228-42006 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:25:30 UTC | Tue, 12 Apr 2022 20:25:40 UTC |
	|         | default-k8s-different-port-20220412201228-42006   |                                                 |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                 |         |         |                               |                               |
	| addons  | enable dashboard -p                               | default-k8s-different-port-20220412201228-42006 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:25:40 UTC | Tue, 12 Apr 2022 20:25:40 UTC |
	|         | default-k8s-different-port-20220412201228-42006   |                                                 |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                 |         |         |                               |                               |
	| -p      | old-k8s-version-20220412200421-42006              | old-k8s-version-20220412200421-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:27:23 UTC | Tue, 12 Apr 2022 20:27:24 UTC |
	|         | logs -n 25                                        |                                                 |         |         |                               |                               |
	| -p      | embed-certs-20220412200510-42006                  | embed-certs-20220412200510-42006                | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:27:28 UTC | Tue, 12 Apr 2022 20:27:28 UTC |
	|         | logs -n 25                                        |                                                 |         |         |                               |                               |
	| -p      | default-k8s-different-port-20220412201228-42006   | default-k8s-different-port-20220412201228-42006 | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:34:43 UTC | Tue, 12 Apr 2022 20:34:44 UTC |
	|         | logs -n 25                                        |                                                 |         |         |                               |                               |
	| -p      | old-k8s-version-20220412200421-42006              | old-k8s-version-20220412200421-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:36:26 UTC | Tue, 12 Apr 2022 20:36:27 UTC |
	|         | logs -n 25                                        |                                                 |         |         |                               |                               |
	| delete  | -p                                                | old-k8s-version-20220412200421-42006            | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:36:27 UTC | Tue, 12 Apr 2022 20:36:30 UTC |
	|         | old-k8s-version-20220412200421-42006              |                                                 |         |         |                               |                               |
	| -p      | embed-certs-20220412200510-42006                  | embed-certs-20220412200510-42006                | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:36:30 UTC | Tue, 12 Apr 2022 20:36:31 UTC |
	|         | logs -n 25                                        |                                                 |         |         |                               |                               |
	| delete  | -p                                                | embed-certs-20220412200510-42006                | jenkins | v1.25.2 | Tue, 12 Apr 2022 20:36:32 UTC | Tue, 12 Apr 2022 20:36:34 UTC |
	|         | embed-certs-20220412200510-42006                  |                                                 |         |         |                               |                               |
	|---------|---------------------------------------------------|-------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
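Each Audit row above is a replayable CLI invocation (reconstructed from the Command/Args columns, so flag ordering is approximate). The stop step recorded at 20:25:30, for example, corresponds to:

	out/minikube-linux-amd64 stop -p default-k8s-different-port-20220412201228-42006 --alsologtostderr -v=3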
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/04/12 20:25:40
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.18 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
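The [IWEF] prefix is the severity: I=info, W=warning, E=error, F=fatal. Filtering on it is a quick way to surface only the warning-level lines in a capture like this one (minikube-logs.txt is a hypothetical file holding this dump):

	grep -E '^[[:space:]]*W[0-9]{4}' minikube-logs.txt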
	I0412 20:25:40.977489  302775 out.go:297] Setting OutFile to fd 1 ...
	I0412 20:25:40.977641  302775 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0412 20:25:40.977651  302775 out.go:310] Setting ErrFile to fd 2...
	I0412 20:25:40.977656  302775 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0412 20:25:40.977775  302775 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/bin
	I0412 20:25:40.978024  302775 out.go:304] Setting JSON to false
	I0412 20:25:40.979319  302775 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":11294,"bootTime":1649783847,"procs":329,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1023-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0412 20:25:40.979397  302775 start.go:125] virtualization: kvm guest
	I0412 20:25:40.982252  302775 out.go:176] * [default-k8s-different-port-20220412201228-42006] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	I0412 20:25:40.984292  302775 out.go:176]   - MINIKUBE_LOCATION=13812
	I0412 20:25:40.982508  302775 notify.go:193] Checking for updates...
	I0412 20:25:40.986069  302775 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0412 20:25:40.987699  302775 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0412 20:25:40.989177  302775 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube
	I0412 20:25:40.990958  302775 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0412 20:25:40.991481  302775 config.go:178] Loaded profile config "default-k8s-different-port-20220412201228-42006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.5
	I0412 20:25:40.992603  302775 driver.go:346] Setting default libvirt URI to qemu:///system
	I0412 20:25:41.036514  302775 docker.go:137] docker version: linux-20.10.14
	I0412 20:25:41.036604  302775 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0412 20:25:41.138222  302775 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:75 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2022-04-12 20:25:41.069111625 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1023-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexSer
verAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662799872 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clie
ntInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0412 20:25:41.138342  302775 docker.go:254] overlay module found
	I0412 20:25:41.140887  302775 out.go:176] * Using the docker driver based on existing profile
	I0412 20:25:41.140919  302775 start.go:284] selected driver: docker
	I0412 20:25:41.140926  302775 start.go:801] validating driver "docker" against &{Name:default-k8s-different-port-20220412201228-42006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:default-k8s-different-port-20220412201228-
42006 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.23.5 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTim
eout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0412 20:25:41.141041  302775 start.go:812] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	W0412 20:25:41.141086  302775 oci.go:120] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0412 20:25:41.141109  302775 out.go:241] ! Your cgroup does not allow setting memory.
	I0412 20:25:41.142724  302775 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0412 20:25:41.143315  302775 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0412 20:25:41.241191  302775 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:75 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:44 SystemTime:2022-04-12 20:25:41.17623516 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1023-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662799872 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] Clien
tInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	W0412 20:25:41.241354  302775 oci.go:120] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0412 20:25:41.241406  302775 out.go:241] ! Your cgroup does not allow setting memory.
	I0412 20:25:41.243729  302775 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0412 20:25:41.243836  302775 start_flags.go:847] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0412 20:25:41.243861  302775 cni.go:93] Creating CNI manager for ""
	I0412 20:25:41.243872  302775 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0412 20:25:41.243889  302775 start_flags.go:306] config:
	{Name:default-k8s-different-port-20220412201228-42006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:default-k8s-different-port-20220412201228-42006 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:
[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.23.5 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: Mu
ltiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0412 20:25:41.246889  302775 out.go:176] * Starting control plane node default-k8s-different-port-20220412201228-42006 in cluster default-k8s-different-port-20220412201228-42006
	I0412 20:25:41.246928  302775 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0412 20:25:41.248537  302775 out.go:176] * Pulling base image ...
	I0412 20:25:41.248572  302775 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime containerd
	I0412 20:25:41.248612  302775 preload.go:148] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-containerd-overlay2-amd64.tar.lz4
	I0412 20:25:41.248642  302775 cache.go:57] Caching tarball of preloaded images
	I0412 20:25:41.248665  302775 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon
	I0412 20:25:41.248918  302775 preload.go:174] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0412 20:25:41.248940  302775 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.5 on containerd
	I0412 20:25:41.249111  302775 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220412201228-42006/config.json ...
	I0412 20:25:41.295232  302775 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon, skipping pull
	I0412 20:25:41.295265  302775 cache.go:141] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 exists in daemon, skipping load
	I0412 20:25:41.295288  302775 cache.go:206] Successfully downloaded all kic artifacts
	I0412 20:25:41.295333  302775 start.go:352] acquiring machines lock for default-k8s-different-port-20220412201228-42006: {Name:mk673e2ef5ad74005354b6f8044ae48e370ea3c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0412 20:25:41.295441  302775 start.go:356] acquired machines lock for "default-k8s-different-port-20220412201228-42006" in 78.98µs
	I0412 20:25:41.295472  302775 start.go:94] Skipping create...Using existing machine configuration
	I0412 20:25:41.295481  302775 fix.go:55] fixHost starting: 
	I0412 20:25:41.295714  302775 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220412201228-42006 --format={{.State.Status}}
	I0412 20:25:41.330052  302775 fix.go:103] recreateIfNeeded on default-k8s-different-port-20220412201228-42006: state=Stopped err=<nil>
	W0412 20:25:41.330099  302775 fix.go:129] unexpected machine state, will restart: <nil>
	I0412 20:25:39.404942  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:25:41.405860  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:25:43.905123  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:25:41.529434  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:25:44.030080  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:25:41.332812  302775 out.go:176] * Restarting existing docker container for "default-k8s-different-port-20220412201228-42006" ...
	I0412 20:25:41.332900  302775 cli_runner.go:164] Run: docker start default-k8s-different-port-20220412201228-42006
	I0412 20:25:41.735198  302775 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220412201228-42006 --format={{.State.Status}}
	I0412 20:25:41.771480  302775 kic.go:416] container "default-k8s-different-port-20220412201228-42006" state is running.
	I0412 20:25:41.771899  302775 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220412201228-42006
	I0412 20:25:41.807070  302775 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220412201228-42006/config.json ...
	I0412 20:25:41.807321  302775 machine.go:88] provisioning docker machine ...
	I0412 20:25:41.807352  302775 ubuntu.go:169] provisioning hostname "default-k8s-different-port-20220412201228-42006"
	I0412 20:25:41.807404  302775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220412201228-42006
	I0412 20:25:41.843643  302775 main.go:134] libmachine: Using SSH client type: native
	I0412 20:25:41.843852  302775 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7d71c0] 0x7da220 <nil>  [] 0s} 127.0.0.1 49437 <nil> <nil>}
	I0412 20:25:41.843870  302775 main.go:134] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20220412201228-42006 && echo "default-k8s-different-port-20220412201228-42006" | sudo tee /etc/hostname
	I0412 20:25:41.844512  302775 main.go:134] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60986->127.0.0.1:49437: read: connection reset by peer
	I0412 20:25:44.977976  302775 main.go:134] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20220412201228-42006
	
	I0412 20:25:44.978060  302775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220412201228-42006
	I0412 20:25:45.012801  302775 main.go:134] libmachine: Using SSH client type: native
	I0412 20:25:45.012959  302775 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x7d71c0] 0x7da220 <nil>  [] 0s} 127.0.0.1 49437 <nil> <nil>}
	I0412 20:25:45.012982  302775 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20220412201228-42006' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20220412201228-42006/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20220412201228-42006' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0412 20:25:45.132428  302775 main.go:134] libmachine: SSH cmd err, output: <nil>: 
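The SSH snippet above rewrites any existing 127.0.1.1 entry so the node resolves its own profile name. A minimal way to verify the result from outside the container (quoting may need adjusting for your shell; the `--` form passes the command through to the node):

	out/minikube-linux-amd64 ssh -p default-k8s-different-port-20220412201228-42006 -- "hostname; grep 127.0.1.1 /etc/hosts"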
	I0412 20:25:45.132458  302775 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.mini
kube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube}
	I0412 20:25:45.132515  302775 ubuntu.go:177] setting up certificates
	I0412 20:25:45.132527  302775 provision.go:83] configureAuth start
	I0412 20:25:45.132583  302775 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220412201228-42006
	I0412 20:25:45.167292  302775 provision.go:138] copyHostCerts
	I0412 20:25:45.167378  302775 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem, removing ...
	I0412 20:25:45.167393  302775 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem
	I0412 20:25:45.167463  302775 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.pem (1082 bytes)
	I0412 20:25:45.167565  302775 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem, removing ...
	I0412 20:25:45.167579  302775 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem
	I0412 20:25:45.167616  302775 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cert.pem (1123 bytes)
	I0412 20:25:45.167686  302775 exec_runner.go:144] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem, removing ...
	I0412 20:25:45.167698  302775 exec_runner.go:207] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem
	I0412 20:25:45.167731  302775 exec_runner.go:151] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/key.pem (1675 bytes)
	I0412 20:25:45.167790  302775 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20220412201228-42006 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-different-port-20220412201228-42006]
	I0412 20:25:45.287902  302775 provision.go:172] copyRemoteCerts
	I0412 20:25:45.287991  302775 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0412 20:25:45.288040  302775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220412201228-42006
	I0412 20:25:45.322519  302775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220412201228-42006/id_rsa Username:docker}
	I0412 20:25:45.411995  302775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0412 20:25:45.430261  302775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server.pem --> /etc/docker/server.pem (1310 bytes)
	I0412 20:25:45.448712  302775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0412 20:25:45.466551  302775 provision.go:86] duration metric: configureAuth took 334.00574ms
	I0412 20:25:45.466577  302775 ubuntu.go:193] setting minikube options for container-runtime
	I0412 20:25:45.466762  302775 config.go:178] Loaded profile config "default-k8s-different-port-20220412201228-42006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.5
	I0412 20:25:45.466775  302775 machine.go:91] provisioned docker machine in 3.659438406s
	I0412 20:25:45.466782  302775 start.go:306] post-start starting for "default-k8s-different-port-20220412201228-42006" (driver="docker")
	I0412 20:25:45.466788  302775 start.go:316] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0412 20:25:45.466829  302775 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0412 20:25:45.466867  302775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220412201228-42006
	I0412 20:25:45.501481  302775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220412201228-42006/id_rsa Username:docker}
	I0412 20:25:45.588112  302775 ssh_runner.go:195] Run: cat /etc/os-release
	I0412 20:25:45.591046  302775 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0412 20:25:45.591069  302775 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0412 20:25:45.591080  302775 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0412 20:25:45.591089  302775 info.go:137] Remote host: Ubuntu 20.04.4 LTS
	I0412 20:25:45.591103  302775 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/addons for local assets ...
	I0412 20:25:45.591152  302775 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files for local assets ...
	I0412 20:25:45.591229  302775 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/420062.pem -> 420062.pem in /etc/ssl/certs
	I0412 20:25:45.591327  302775 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0412 20:25:45.598574  302775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/420062.pem --> /etc/ssl/certs/420062.pem (1708 bytes)
	I0412 20:25:45.617879  302775 start.go:309] post-start completed in 151.076407ms
	I0412 20:25:45.617968  302775 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0412 20:25:45.618023  302775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220412201228-42006
	I0412 20:25:45.652386  302775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220412201228-42006/id_rsa Username:docker}
	I0412 20:25:45.736884  302775 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0412 20:25:45.741043  302775 fix.go:57] fixHost completed within 4.445551228s
	I0412 20:25:45.741076  302775 start.go:81] releasing machines lock for "default-k8s-different-port-20220412201228-42006", held for 4.445612789s
	I0412 20:25:45.741159  302775 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-different-port-20220412201228-42006
	I0412 20:25:45.775496  302775 ssh_runner.go:195] Run: systemctl --version
	I0412 20:25:45.775542  302775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220412201228-42006
	I0412 20:25:45.775584  302775 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0412 20:25:45.775646  302775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220412201228-42006
	I0412 20:25:45.812306  302775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220412201228-42006/id_rsa Username:docker}
	I0412 20:25:45.812626  302775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220412201228-42006/id_rsa Username:docker}
	I0412 20:25:45.921246  302775 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0412 20:25:45.933022  302775 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0412 20:25:45.942974  302775 docker.go:183] disabling docker service ...
	I0412 20:25:45.943055  302775 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0412 20:25:45.953239  302775 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0412 20:25:45.962782  302775 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0412 20:25:46.404485  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:25:48.404784  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:25:46.529944  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:25:48.530319  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:25:46.046623  302775 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0412 20:25:46.129007  302775 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0412 20:25:46.138577  302775 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0412 20:25:46.152328  302775 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "dmVyc2lvbiA9IDIKcm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLiJpby5jb250YWluZXJkLm1vbml0b3IudjEuY2dyb3VwcyJdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNiIKICAgIHN0YXRzX2NvbGxlY3RfcGVyaW9kID0gMTAKICAgIGVuYWJsZV90bHNfc3RyZWFtaW5nID0gZ
mFsc2UKICAgIG1heF9jb250YWluZXJfbG9nX2xpbmVfc2l6ZSA9IDE2Mzg0CiAgICByZXN0cmljdF9vb21fc2NvcmVfYWRqID0gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZF0KICAgICAgZGlzY2FyZF91bnBhY2tlZF9sYXllcnMgPSB0cnVlCiAgICAgIHNuYXBzaG90dGVyID0gIm92ZXJsYXlmcyIKICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQuZGVmYXVsdF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnVudHJ1c3RlZF93b3JrbG9hZF9ydW50aW1lXQogICAgICAgIHJ1bnRpbWVfdHlwZSA9ICIiCiAgICAgICAgcnVudGltZV9lbmdpbmUgPSAiIgogICAgICAgIHJ1bnRpbWVfcm9vdCA9ICIiCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzLnJ1bmNdCiAgICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICBTeXN0ZW1kQ
2dyb3VwID0gZmFsc2UKCiAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY25pXQogICAgICBiaW5fZGlyID0gIi9vcHQvY25pL2JpbiIKICAgICAgY29uZl9kaXIgPSAiL2V0Yy9jbmkvbmV0Lm1rIgogICAgICBjb25mX3RlbXBsYXRlID0gIiIKICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5yZWdpc3RyeV0KICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5Lm1pcnJvcnNdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLnJlZ2lzdHJ5Lm1pcnJvcnMuImRvY2tlci5pbyJdCiAgICAgICAgICBlbmRwb2ludCA9IFsiaHR0cHM6Ly9yZWdpc3RyeS0xLmRvY2tlci5pbyJdCiAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuc2VydmljZS52MS5kaWZmLXNlcnZpY2UiXQogICAgZGVmYXVsdCA9IFsid2Fsa2luZyJdCiAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ2MudjEuc2NoZWR1bGVyIl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
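The containerd configuration is shipped as a base64 payload and materialized via `base64 -d | sudo tee /etc/containerd/config.toml`, as the command above shows. To read what actually landed there, either decode the payload locally (`<payload>` stands in for the blob above; it is not reproduced here) or read the file back on the node:

	echo '<payload>' | base64 -d | less
	sudo cat /etc/containerd/config.toml   # on the node itself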
	I0412 20:25:46.166473  302775 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0412 20:25:46.173272  302775 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0412 20:25:46.180113  302775 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0412 20:25:46.251894  302775 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0412 20:25:46.327719  302775 start.go:441] Will wait 60s for socket path /run/containerd/containerd.sock
	I0412 20:25:46.327799  302775 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0412 20:25:46.331793  302775 start.go:462] Will wait 60s for crictl version
	I0412 20:25:46.331863  302775 ssh_runner.go:195] Run: sudo crictl version
	I0412 20:25:46.357306  302775 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2022-04-12T20:25:46Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
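The fatal "server is not initialized yet" here only means containerd was restarted moments earlier and its CRI plugin had not yet come up; the retry above succeeds about 11s later. A manual equivalent of the probe, with the endpoint the logs configure in /etc/crictl.yaml spelled out explicitly:

	sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock version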
	I0412 20:25:50.405078  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:25:52.905509  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:25:51.029894  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:25:53.030953  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:25:55.529321  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:25:57.404189  302775 ssh_runner.go:195] Run: sudo crictl version
	I0412 20:25:57.428756  302775 start.go:471] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.5.10
	RuntimeApiVersion:  v1alpha2
	I0412 20:25:57.428821  302775 ssh_runner.go:195] Run: containerd --version
	I0412 20:25:57.451527  302775 ssh_runner.go:195] Run: containerd --version
	I0412 20:25:57.476141  302775 out.go:176] * Preparing Kubernetes v1.23.5 on containerd 1.5.10 ...
	I0412 20:25:57.476238  302775 cli_runner.go:164] Run: docker network inspect default-k8s-different-port-20220412201228-42006 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0412 20:25:57.510584  302775 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0412 20:25:57.514080  302775 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0412 20:25:55.405528  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:25:57.904637  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:25:57.529524  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:25:59.529890  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:25:57.525999  302775 out.go:176]   - kubelet.cni-conf-dir=/etc/cni/net.mk
	I0412 20:25:57.526084  302775 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime containerd
	I0412 20:25:57.526141  302775 ssh_runner.go:195] Run: sudo crictl images --output json
	I0412 20:25:57.550533  302775 containerd.go:607] all images are preloaded for containerd runtime.
	I0412 20:25:57.550557  302775 containerd.go:521] Images already preloaded, skipping extraction
	I0412 20:25:57.550612  302775 ssh_runner.go:195] Run: sudo crictl images --output json
	I0412 20:25:57.574550  302775 containerd.go:607] all images are preloaded for containerd runtime.
	I0412 20:25:57.574580  302775 cache_images.go:84] Images are preloaded, skipping loading
	I0412 20:25:57.574639  302775 ssh_runner.go:195] Run: sudo crictl info
	I0412 20:25:57.599639  302775 cni.go:93] Creating CNI manager for ""
	I0412 20:25:57.599668  302775 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0412 20:25:57.599690  302775 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0412 20:25:57.599711  302775 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8444 KubernetesVersion:v1.23.5 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20220412201228-42006 NodeName:default-k8s-different-port-20220412201228-42006 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49
.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0412 20:25:57.599848  302775 kubeadm.go:162] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "default-k8s-different-port-20220412201228-42006"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.23.5
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
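The generated file is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that the kubeadm phase commands later in this log consume as a unit. Splitting it into one file per document for inspection is a one-liner; a sketch, assuming the file is at the path it is copied to below:

	awk 'BEGIN{n=0} /^---$/{n++; next} {print > ("doc-" n ".yaml")}' /var/tmp/minikube/kubeadm.yaml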
	I0412 20:25:57.599941  302775 kubeadm.go:936] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.23.5/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cni-conf-dir=/etc/cni/net.mk --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=default-k8s-different-port-20220412201228-42006 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.23.5 ClusterName:default-k8s-different-port-20220412201228-42006 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0412 20:25:57.600004  302775 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.5
	I0412 20:25:57.607520  302775 binaries.go:44] Found k8s binaries, skipping transfer
	I0412 20:25:57.607582  302775 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0412 20:25:57.614505  302775 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (592 bytes)
	I0412 20:25:57.627492  302775 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0412 20:25:57.640002  302775 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2076 bytes)
	I0412 20:25:57.652626  302775 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0412 20:25:57.655502  302775 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
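The /etc/hosts edit above deliberately finishes with cp rather than mv: inside a Docker container /etc/hosts is bind-mounted, so the file has to be rewritten in place instead of replaced by rename. The same pattern, annotated (a sketch of the idiom, not minikube's exact code path):

	{ grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts;   # drop any stale entry
	  echo $'192.168.49.2\tcontrol-plane.minikube.internal'; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts    # cp writes in place, keeping the bind mount valid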
	I0412 20:25:57.664909  302775 certs.go:54] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220412201228-42006 for IP: 192.168.49.2
	I0412 20:25:57.665006  302775 certs.go:182] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.key
	I0412 20:25:57.665052  302775 certs.go:182] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.key
	I0412 20:25:57.665122  302775 certs.go:298] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220412201228-42006/client.key
	I0412 20:25:57.665173  302775 certs.go:298] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220412201228-42006/apiserver.key.dd3b5fb2
	I0412 20:25:57.665208  302775 certs.go:298] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220412201228-42006/proxy-client.key
	I0412 20:25:57.665293  302775 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/42006.pem (1338 bytes)
	W0412 20:25:57.665321  302775 certs.go:384] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/42006_empty.pem, impossibly tiny 0 bytes
	I0412 20:25:57.665332  302775 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca-key.pem (1679 bytes)
	I0412 20:25:57.665358  302775 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/ca.pem (1082 bytes)
	I0412 20:25:57.665384  302775 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/cert.pem (1123 bytes)
	I0412 20:25:57.665409  302775 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/key.pem (1675 bytes)
	I0412 20:25:57.665455  302775 certs.go:388] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/420062.pem (1708 bytes)
	I0412 20:25:57.666053  302775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220412201228-42006/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0412 20:25:57.683954  302775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220412201228-42006/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0412 20:25:57.701541  302775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220412201228-42006/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0412 20:25:57.719461  302775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/default-k8s-different-port-20220412201228-42006/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0412 20:25:57.737734  302775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0412 20:25:57.756457  302775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0412 20:25:57.774968  302775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0412 20:25:57.793059  302775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0412 20:25:57.810982  302775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0412 20:25:57.829015  302775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/certs/42006.pem --> /usr/share/ca-certificates/42006.pem (1338 bytes)
	I0412 20:25:57.847312  302775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/ssl/certs/420062.pem --> /usr/share/ca-certificates/420062.pem (1708 bytes)
	I0412 20:25:57.864991  302775 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0412 20:25:57.878055  302775 ssh_runner.go:195] Run: openssl version
	I0412 20:25:57.883971  302775 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/420062.pem && ln -fs /usr/share/ca-certificates/420062.pem /etc/ssl/certs/420062.pem"
	I0412 20:25:57.892175  302775 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/420062.pem
	I0412 20:25:57.895736  302775 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Apr 12 19:26 /usr/share/ca-certificates/420062.pem
	I0412 20:25:57.895785  302775 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/420062.pem
	I0412 20:25:57.900802  302775 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/420062.pem /etc/ssl/certs/3ec20f2e.0"
	I0412 20:25:57.908397  302775 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0412 20:25:57.916262  302775 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0412 20:25:57.919469  302775 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Apr 12 19:21 /usr/share/ca-certificates/minikubeCA.pem
	I0412 20:25:57.919524  302775 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0412 20:25:57.924891  302775 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0412 20:25:57.932113  302775 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/42006.pem && ln -fs /usr/share/ca-certificates/42006.pem /etc/ssl/certs/42006.pem"
	I0412 20:25:57.940241  302775 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/42006.pem
	I0412 20:25:57.943396  302775 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Apr 12 19:26 /usr/share/ca-certificates/42006.pem
	I0412 20:25:57.943447  302775 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/42006.pem
	I0412 20:25:57.948339  302775 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/42006.pem /etc/ssl/certs/51391683.0"
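The 8-hex-digit link names above (3ec20f2e.0, b5213941.0, 51391683.0) are OpenSSL subject-name hashes: libssl looks CAs up in /etc/ssl/certs by <hash>.0, so each certificate is symlinked under its computed hash. The equivalent by hand:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"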
	I0412 20:25:57.955118  302775 kubeadm.go:391] StartCluster: {Name:default-k8s-different-port-20220412201228-42006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:default-k8s-different-port-20220412201228-42006 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.23.5 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.5.1@sha256:cc746e7a0b1eec0db01cbabbb6386b23d7af97e79fa9e36bb883a95b7eb96fe2 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0412 20:25:57.955221  302775 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0412 20:25:57.955270  302775 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0412 20:25:57.980566  302775 cri.go:87] found id: "9833ae46466ccbb9055512e120cc659bee6ff8ac05bf843caeb9217333fd6b63"
	I0412 20:25:57.980602  302775 cri.go:87] found id: "e86db06fb9ce1685b312bc36622f28895b85dab6e39ee399901dce4efc6da848"
	I0412 20:25:57.980613  302775 cri.go:87] found id: "51def5f5fb57c8ab61a9c585b1fe038e725e93a3a81684c7e48cceffbcd0e646"
	I0412 20:25:57.980624  302775 cri.go:87] found id: "3c8657a1a5932876c532e5632e32b1b7bd034c015a4b5519a1ff53cf749d1ffd"
	I0412 20:25:57.980634  302775 cri.go:87] found id: "1032ec9dc604b2d805be253a0f7df89424fc5ef71ef86566ee57cd79cf66939c"
	I0412 20:25:57.980651  302775 cri.go:87] found id: "71af7fb31571e3cef12dcdba3ab49897e95bdbe6c1d9d6d5bbb1c36c97242cda"
	I0412 20:25:57.980666  302775 cri.go:87] found id: ""
	I0412 20:25:57.980719  302775 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0412 20:25:57.995137  302775 cri.go:114] JSON = null
	W0412 20:25:57.995186  302775 kubeadm.go:398] unpause failed: list paused: list returned 0 containers, but ps returned 6
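The unpause warning above comes from comparing two views of the same containerd state: crictl reports 6 kube-system containers, while runc's state listing for the k8s.io namespace returns null, so minikube cannot determine which containers are paused and skips the unpause step. Both probes are reproducible verbatim on the node:

	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	sudo runc --root /run/containerd/runc/k8s.io list -f json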
	I0412 20:25:57.995232  302775 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0412 20:25:58.002528  302775 kubeadm.go:402] found existing configuration files, will attempt cluster restart
	I0412 20:25:58.002554  302775 kubeadm.go:601] restartCluster start
	I0412 20:25:58.002599  302775 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0412 20:25:58.009347  302775 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:25:58.010180  302775 kubeconfig.go:116] verify returned: extract IP: "default-k8s-different-port-20220412201228-42006" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0412 20:25:58.010679  302775 kubeconfig.go:127] "default-k8s-different-port-20220412201228-42006" context is missing from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig - will repair!
	I0412 20:25:58.011431  302775 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig: {Name:mk47182dadcc139652898f38b199a7292a3b4031 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 20:25:58.013184  302775 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0412 20:25:58.020529  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:25:58.020588  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:25:58.029161  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:25:58.229565  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:25:58.229683  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:25:58.238841  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:25:58.430075  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:25:58.430153  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:25:58.439240  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:25:58.629511  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:25:58.629591  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:25:58.638727  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:25:58.829920  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:25:58.830002  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:25:58.839034  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:25:59.030207  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:25:59.030273  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:25:59.038870  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:25:59.230141  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:25:59.230228  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:25:59.239506  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:25:59.429823  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:25:59.429895  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:25:59.438940  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:25:59.630148  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:25:59.630223  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:25:59.639014  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:25:59.830279  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:25:59.830365  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:25:59.839400  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:26:00.029480  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:26:00.029578  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:26:00.039506  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:26:00.229819  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:26:00.229932  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:26:00.238666  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:26:00.429971  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:26:00.430041  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:26:00.439152  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:26:00.629391  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:26:00.629472  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:26:00.638771  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:26:00.830087  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:26:00.830179  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:26:00.839152  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:25:59.905306  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:01.905660  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:02.030088  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:04.030403  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:01.029653  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:26:01.029717  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:26:01.038688  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:26:01.038731  302775 api_server.go:165] Checking apiserver status ...
	I0412 20:26:01.038777  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0412 20:26:01.047040  302775 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
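Every probe in the loop above is the same single command; with -f the pattern is matched against the full command line, -x requires the whole line to match the pattern, and -n keeps only the newest matching pid, so a non-zero exit simply means no kube-apiserver process exists yet:

	sudo pgrep -xnf 'kube-apiserver.*minikube.*'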
	I0412 20:26:01.047087  302775 kubeadm.go:576] needs reconfigure: apiserver error: timed out waiting for the condition
	I0412 20:26:01.047098  302775 kubeadm.go:1067] stopping kube-system containers ...
	I0412 20:26:01.047119  302775 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0412 20:26:01.047173  302775 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0412 20:26:01.074252  302775 cri.go:87] found id: "9833ae46466ccbb9055512e120cc659bee6ff8ac05bf843caeb9217333fd6b63"
	I0412 20:26:01.074279  302775 cri.go:87] found id: "e86db06fb9ce1685b312bc36622f28895b85dab6e39ee399901dce4efc6da848"
	I0412 20:26:01.074289  302775 cri.go:87] found id: "51def5f5fb57c8ab61a9c585b1fe038e725e93a3a81684c7e48cceffbcd0e646"
	I0412 20:26:01.074295  302775 cri.go:87] found id: "3c8657a1a5932876c532e5632e32b1b7bd034c015a4b5519a1ff53cf749d1ffd"
	I0412 20:26:01.074302  302775 cri.go:87] found id: "1032ec9dc604b2d805be253a0f7df89424fc5ef71ef86566ee57cd79cf66939c"
	I0412 20:26:01.074309  302775 cri.go:87] found id: "71af7fb31571e3cef12dcdba3ab49897e95bdbe6c1d9d6d5bbb1c36c97242cda"
	I0412 20:26:01.074316  302775 cri.go:87] found id: ""
	I0412 20:26:01.074322  302775 cri.go:232] Stopping containers: [9833ae46466ccbb9055512e120cc659bee6ff8ac05bf843caeb9217333fd6b63 e86db06fb9ce1685b312bc36622f28895b85dab6e39ee399901dce4efc6da848 51def5f5fb57c8ab61a9c585b1fe038e725e93a3a81684c7e48cceffbcd0e646 3c8657a1a5932876c532e5632e32b1b7bd034c015a4b5519a1ff53cf749d1ffd 1032ec9dc604b2d805be253a0f7df89424fc5ef71ef86566ee57cd79cf66939c 71af7fb31571e3cef12dcdba3ab49897e95bdbe6c1d9d6d5bbb1c36c97242cda]
	I0412 20:26:01.074376  302775 ssh_runner.go:195] Run: which crictl
	I0412 20:26:01.077493  302775 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop 9833ae46466ccbb9055512e120cc659bee6ff8ac05bf843caeb9217333fd6b63 e86db06fb9ce1685b312bc36622f28895b85dab6e39ee399901dce4efc6da848 51def5f5fb57c8ab61a9c585b1fe038e725e93a3a81684c7e48cceffbcd0e646 3c8657a1a5932876c532e5632e32b1b7bd034c015a4b5519a1ff53cf749d1ffd 1032ec9dc604b2d805be253a0f7df89424fc5ef71ef86566ee57cd79cf66939c 71af7fb31571e3cef12dcdba3ab49897e95bdbe6c1d9d6d5bbb1c36c97242cda
	I0412 20:26:01.103072  302775 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0412 20:26:01.114425  302775 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0412 20:26:01.122172  302775 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Apr 12 20:12 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Apr 12 20:12 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2127 Apr 12 20:13 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5592 Apr 12 20:12 /etc/kubernetes/scheduler.conf
	
	I0412 20:26:01.122241  302775 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0412 20:26:01.129554  302775 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0412 20:26:01.136877  302775 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0412 20:26:01.143698  302775 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:26:01.143755  302775 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0412 20:26:01.150238  302775 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0412 20:26:01.157232  302775 kubeadm.go:166] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0412 20:26:01.157288  302775 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0412 20:26:01.164343  302775 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0412 20:26:01.171782  302775 kubeadm.go:678] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0412 20:26:01.171805  302775 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0412 20:26:01.218060  302775 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0412 20:26:01.745379  302775 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0412 20:26:01.885213  302775 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0412 20:26:01.938174  302775 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
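Rather than re-running a full kubeadm init, the restart path replays a subset of init phases against the regenerated config; the five invocations above collapse to this loop (a sketch using the same binaries and config file shown in the log):

	for p in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
	  sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" \
	    kubeadm init phase $p --config /var/tmp/minikube/kubeadm.yaml
	done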
	I0412 20:26:02.011809  302775 api_server.go:51] waiting for apiserver process to appear ...
	I0412 20:26:02.011879  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:26:02.521271  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:26:03.021279  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:26:03.521794  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:26:04.021460  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:26:04.521473  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:26:05.021310  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:26:05.521258  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:26:04.405325  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:06.905312  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:06.529561  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:08.530280  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:06.022069  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:26:06.522094  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:26:07.022120  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:26:07.521096  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:26:08.021120  302775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 20:26:08.091617  302775 api_server.go:71] duration metric: took 6.079806462s to wait for apiserver process to appear ...
	I0412 20:26:08.091701  302775 api_server.go:87] waiting for apiserver healthz status ...
	I0412 20:26:08.091726  302775 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0412 20:26:08.092170  302775 api_server.go:256] stopped: https://192.168.49.2:8444/healthz: Get "https://192.168.49.2:8444/healthz": dial tcp 192.168.49.2:8444: connect: connection refused
	I0412 20:26:08.592673  302775 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0412 20:26:11.086493  302775 api_server.go:266] https://192.168.49.2:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0412 20:26:11.086525  302775 api_server.go:102] status: https://192.168.49.2:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0412 20:26:11.092362  302775 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0412 20:26:11.097010  302775 api_server.go:266] https://192.168.49.2:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0412 20:26:11.097085  302775 api_server.go:102] status: https://192.168.49.2:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0412 20:26:11.592382  302775 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0412 20:26:11.597320  302775 api_server.go:266] https://192.168.49.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0412 20:26:11.597353  302775 api_server.go:102] status: https://192.168.49.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0412 20:26:12.092945  302775 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0412 20:26:12.097452  302775 api_server.go:266] https://192.168.49.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0412 20:26:12.097482  302775 api_server.go:102] status: https://192.168.49.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0412 20:26:12.593112  302775 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8444/healthz ...
	I0412 20:26:12.598178  302775 api_server.go:266] https://192.168.49.2:8444/healthz returned 200:
	ok
	I0412 20:26:12.604429  302775 api_server.go:140] control plane version: v1.23.5
	I0412 20:26:12.604455  302775 api_server.go:130] duration metric: took 4.512735667s to wait for apiserver health ...
	I0412 20:26:12.604466  302775 cni.go:93] Creating CNI manager for ""
	I0412 20:26:12.604475  302775 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0412 20:26:09.405613  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:11.905154  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:11.029929  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:13.030209  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:15.530013  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:12.607164  302775 out.go:176] * Configuring CNI (Container Networking Interface) ...
	I0412 20:26:12.607235  302775 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0412 20:26:12.610895  302775 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.5/kubectl ...
	I0412 20:26:12.610917  302775 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0412 20:26:12.624805  302775 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0412 20:26:13.514228  302775 system_pods.go:43] waiting for kube-system pods to appear ...
	I0412 20:26:13.521326  302775 system_pods.go:59] 9 kube-system pods found
	I0412 20:26:13.521387  302775 system_pods.go:61] "coredns-64897985d-c2gzm" [17d60869-0f98-4975-877a-d2ac69c4c6c2] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0412 20:26:13.521400  302775 system_pods.go:61] "etcd-default-k8s-different-port-20220412201228-42006" [90ac8791-2f40-445e-a751-748814d43a72] Running
	I0412 20:26:13.521415  302775 system_pods.go:61] "kindnet-852v4" [d4596d79-4aba-4c96-9fd5-c2c2b2010810] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0412 20:26:13.521437  302775 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20220412201228-42006" [a3eb3b43-f13c-4205-9caf-0b3914050d7c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0412 20:26:13.521450  302775 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20220412201228-42006" [fca7914c-0a48-40de-af60-44c695d023c7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0412 20:26:13.521456  302775 system_pods.go:61] "kube-proxy-nfsgp" [fb26fa90-e38d-4c50-bbdc-aa46859bef70] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0412 20:26:13.521466  302775 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20220412201228-42006" [9fbd69c6-cf7b-4801-b028-f7729f80bf64] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0412 20:26:13.521475  302775 system_pods.go:61] "metrics-server-b955d9d8-8z9c9" [e954cf67-0a7d-42ed-b754-921b79512531] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0412 20:26:13.521484  302775 system_pods.go:61] "storage-provisioner" [c1d494a3-740b-43f4-bd16-12e781074fdd] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.)
	I0412 20:26:13.521493  302775 system_pods.go:74] duration metric: took 7.243145ms to wait for pod list to return data ...
	I0412 20:26:13.521504  302775 node_conditions.go:102] verifying NodePressure condition ...
	I0412 20:26:13.524664  302775 node_conditions.go:122] node storage ephemeral capacity is 304695084Ki
	I0412 20:26:13.524723  302775 node_conditions.go:123] node cpu capacity is 8
	I0412 20:26:13.524744  302775 node_conditions.go:105] duration metric: took 3.23136ms to run NodePressure ...
	I0412 20:26:13.524771  302775 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0412 20:26:13.661578  302775 kubeadm.go:737] waiting for restarted kubelet to initialise ...
	I0412 20:26:13.665722  302775 kubeadm.go:752] kubelet initialised
	I0412 20:26:13.665746  302775 kubeadm.go:753] duration metric: took 4.136738ms waiting for restarted kubelet to initialise ...
	I0412 20:26:13.665755  302775 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0412 20:26:13.670837  302775 pod_ready.go:78] waiting up to 4m0s for pod "coredns-64897985d-c2gzm" in "kube-system" namespace to be "Ready" ...
	I0412 20:26:15.676828  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
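coredns and the other Pending pods above are all blocked by the node.kubernetes.io/not-ready taint, which is removed once the CNI is functional and the node reports Ready. One way to watch the taint directly (a hypothetical inspection, assuming kubectl is pointed at this cluster's kubeconfig):

	kubectl get node default-k8s-different-port-20220412201228-42006 \
	  -o jsonpath='{.spec.taints}'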
	I0412 20:26:14.405001  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:16.405140  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:18.405282  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:18.029626  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:20.029796  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:18.177431  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:20.676699  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:20.904768  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:22.905306  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:22.530289  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:25.030441  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:22.676917  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:25.177312  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:25.405505  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:27.405547  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:27.529706  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:29.529954  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:27.677396  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:30.176836  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:29.904767  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:31.905389  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:32.029879  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:34.030539  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:32.177928  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:34.676583  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:34.405637  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:36.904807  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:36.030819  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:38.529411  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:40.529737  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:36.676861  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:38.676927  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:39.404491  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:41.404659  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:43.905243  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:43.029801  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:45.030177  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:41.177333  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:43.177431  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:45.177567  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:46.404939  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:48.405023  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:47.529990  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:50.029848  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:47.676992  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:50.177314  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:50.904925  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:52.905456  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:52.529958  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:54.530211  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:52.677354  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:55.177581  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:55.404968  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:57.904806  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:26:57.029172  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:59.029355  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:26:57.177797  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:59.676784  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:26:59.905303  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:27:02.404803  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:27:01.030119  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:27:03.529481  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:27:02.176739  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:04.677083  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:04.904522  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:27:06.905502  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:27:06.030007  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:27:08.529404  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:27:07.177282  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:09.677448  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:09.405228  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:27:11.905282  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:27:11.029791  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:27:13.030282  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:27:15.529429  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:27:12.176384  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:14.177069  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:14.404646  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:27:16.405558  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:27:18.905261  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:27:17.530006  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:27:20.030016  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:27:16.177280  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:18.677413  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:21.405385  289404 node_ready.go:58] node "old-k8s-version-20220412200421-42006" has status "Ready":"False"
	I0412 20:27:22.907629  289404 node_ready.go:38] duration metric: took 4m0.012711851s waiting for node "old-k8s-version-20220412200421-42006" to be "Ready" ...
	I0412 20:27:22.910753  289404 out.go:176] 
	W0412 20:27:22.910934  289404 out.go:241] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0412 20:27:22.910950  289404 out.go:241] * 
	W0412 20:27:22.911829  289404 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
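	[Editor's annotation, not part of the captured log: both failing clusters above are stuck in the same state -- the node never reports Ready, so every pod that does not tolerate node.kubernetes.io/not-ready stays Unschedulable, exactly as the coredns struct dumps show. A minimal client-go sketch for inspecting that state out-of-band; it assumes a reachable kubeconfig, and the node name is copied from the log:]

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load ~/.kube/config (RecommendedHomeFile) and build a clientset.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.TODO(),
		"old-k8s-version-20220412200421-42006", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// The Ready condition's Reason/Message usually names what the kubelet is
	// still waiting for.
	for _, c := range node.Status.Conditions {
		fmt.Printf("%-20s %-6s %s: %s\n", c.Type, c.Status, c.Reason, c.Message)
	}
	// These taints are what keep coredns Pending in the log above.
	for _, t := range node.Spec.Taints {
		fmt.Printf("taint %s=%s:%s\n", t.Key, t.Value, t.Effect)
	}
}
```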
	I0412 20:27:22.030056  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:27:24.529656  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:27:21.176971  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:23.676778  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:25.677210  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:27.029850  293188 node_ready.go:58] node "embed-certs-20220412200510-42006" has status "Ready":"False"
	I0412 20:27:27.532457  293188 node_ready.go:38] duration metric: took 4m0.016261704s waiting for node "embed-certs-20220412200510-42006" to be "Ready" ...
	I0412 20:27:27.535074  293188 out.go:176] 
	W0412 20:27:27.535184  293188 out.go:241] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0412 20:27:27.535195  293188 out.go:241] * 
	W0412 20:27:27.535868  293188 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
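	[Editor's annotation, not part of the captured log: the two "Exiting due to GUEST_START" failures are plain poll timeouts -- a condition check runs on an interval until a deadline, then surfaces "timed out waiting for the condition". This is an assumed illustration of that pattern using the apimachinery wait helper, not minikube's actual code; nodeIsReady is a hypothetical stand-in for the node_ready.go check:]

```go
package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// nodeIsReady stands in for the real readiness probe; here it never succeeds,
// matching the endless "Ready":"False" lines in the log.
func nodeIsReady() bool { return false }

func main() {
	// Poll every 2s, give up after 10s (minikube's real deadline above is 6m0s).
	err := wait.PollImmediate(2*time.Second, 10*time.Second, func() (bool, error) {
		ready := nodeIsReady()
		if !ready {
			fmt.Println("node not Ready yet")
		}
		return ready, nil
	})
	if err != nil {
		// On deadline this prints the same error string echoed in the log.
		fmt.Println("waitNodeCondition:", err)
	}
}
```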
	I0412 20:27:28.176545  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:30.177022  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:32.677020  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:35.177243  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:37.677194  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:40.176627  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:42.177209  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:44.677318  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:46.677818  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:49.176630  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:51.676722  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:54.176912  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:56.177137  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:27:58.677009  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:28:01.177266  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:28:03.676844  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:28:06.176674  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:28:08.177076  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:28:10.177207  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:28:12.676641  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:28:15.176557  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:28:17.677002  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:28:19.677697  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:28:22.176483  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:28:24.676630  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:28:26.677667  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:28:29.177357  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:28:31.677367  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:28:34.176852  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:28:36.177402  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:28:38.677164  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:28:41.177066  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:28:43.676983  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:28:46.177366  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:28:48.677127  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:28:50.677295  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:28:53.177230  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:28:55.677228  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:28:58.176672  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:29:00.176822  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:29:02.676739  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:29:04.677056  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:29:06.677123  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:29:09.176984  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:29:11.677277  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:29:14.176562  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:29:16.176807  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:29:18.677182  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:29:21.177384  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:29:23.677402  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:29:26.176749  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:29:28.176804  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:29:30.177721  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:29:32.676621  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:29:34.677246  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:29:36.677802  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:29:39.176692  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:29:41.676441  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:29:43.676503  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:29:45.677234  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:29:48.177008  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:29:50.677510  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:29:53.177088  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:29:55.677043  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:29:58.176812  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:30:00.177215  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:30:02.676366  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:30:04.676503  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:30:06.676719  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:30:08.677078  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:30:11.176385  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:30:13.176787  302775 pod_ready.go:102] pod "coredns-64897985d-c2gzm" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-12 20:13:22 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[]}
	I0412 20:30:13.673973  302775 pod_ready.go:81] duration metric: took 4m0.003097375s waiting for pod "coredns-64897985d-c2gzm" in "kube-system" namespace to be "Ready" ...
	E0412 20:30:13.674004  302775 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "coredns-64897985d-c2gzm" in "kube-system" namespace to be "Ready" (will not retry!)
	I0412 20:30:13.674026  302775 pod_ready.go:38] duration metric: took 4m0.008261536s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0412 20:30:13.674088  302775 kubeadm.go:605] restartCluster took 4m15.671526358s
	W0412 20:30:13.674261  302775 out.go:241] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0412 20:30:13.674296  302775 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0412 20:30:15.434543  302775 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (1.760223538s)
	I0412 20:30:15.434648  302775 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0412 20:30:15.444487  302775 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0412 20:30:15.452033  302775 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I0412 20:30:15.452119  302775 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0412 20:30:15.459066  302775 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
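	
	For reference, the stale-config check above is just an ls over the four kubeconfig files kubeadm writes, so exit status 2 here means a clean slate rather than an error. A minimal sketch of re-running it by hand against the kic node container (container name taken from this log; the docker driver runs the node as a container) would be:
	
	  # sketch: re-run minikube's stale-config probe inside the node container
	  docker exec default-k8s-different-port-20220412201228-42006 \
	    sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf \
	      /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	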
	I0412 20:30:15.459111  302775 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.5:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0412 20:30:28.943093  302775 out.go:203]   - Generating certificates and keys ...
	I0412 20:30:28.946723  302775 out.go:203]   - Booting up control plane ...
	I0412 20:30:28.949531  302775 out.go:203]   - Configuring RBAC rules ...
	I0412 20:30:28.951251  302775 cni.go:93] Creating CNI manager for ""
	I0412 20:30:28.951270  302775 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0412 20:30:28.954437  302775 out.go:176] * Configuring CNI (Container Networking Interface) ...
	I0412 20:30:28.954502  302775 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0412 20:30:28.958449  302775 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.23.5/kubectl ...
	I0412 20:30:28.958473  302775 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I0412 20:30:28.972610  302775 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0412 20:30:29.581068  302775 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0412 20:30:29.581147  302775 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl label nodes minikube.k8s.io/version=v1.25.2 minikube.k8s.io/commit=dcd548d63d1c0dcbdc0ffc0bd37d4379117c142f minikube.k8s.io/name=default-k8s-different-port-20220412201228-42006 minikube.k8s.io/updated_at=2022_04_12T20_30_29_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:30:29.581148  302775 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:30:29.588127  302775 ops.go:34] apiserver oom_adj: -16
	I0412 20:30:29.648666  302775 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:30:30.229416  302775 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:30:30.729281  302775 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:30:31.229706  302775 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:30:31.729052  302775 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:30:32.228891  302775 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:30:32.729287  302775 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:30:33.228878  302775 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:30:33.729605  302775 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:30:34.229274  302775 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:30:34.729516  302775 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:30:35.229278  302775 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:30:35.729029  302775 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:30:36.228984  302775 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:30:36.729282  302775 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:30:37.229296  302775 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:30:37.729119  302775 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:30:38.229274  302775 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:30:38.729302  302775 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:30:39.229163  302775 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:30:39.728992  302775 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:30:40.229522  302775 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:30:40.729277  302775 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:30:41.228750  302775 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:30:41.729285  302775 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:30:42.228910  302775 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:30:42.729297  302775 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.5/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0412 20:30:42.795666  302775 kubeadm.go:1020] duration metric: took 13.214575797s to wait for elevateKubeSystemPrivileges.
	I0412 20:30:42.795702  302775 kubeadm.go:393] StartCluster complete in 4m44.840593181s
	I0412 20:30:42.795726  302775 settings.go:142] acquiring lock: {Name:mkaf0259d09993f7f0249c32b54fea561e21f88c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 20:30:42.795894  302775 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0412 20:30:42.797959  302775 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig: {Name:mk47182dadcc139652898f38b199a7292a3b4031 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 20:30:43.316096  302775 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20220412201228-42006" rescaled to 1
	I0412 20:30:43.316236  302775 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0412 20:30:43.316267  302775 addons.go:415] enableAddons start: toEnable=map[dashboard:true metrics-server:true], additional=[]
	I0412 20:30:43.316330  302775 addons.go:65] Setting storage-provisioner=true in profile "default-k8s-different-port-20220412201228-42006"
	I0412 20:30:43.316365  302775 addons.go:65] Setting default-storageclass=true in profile "default-k8s-different-port-20220412201228-42006"
	I0412 20:30:43.316387  302775 addons.go:65] Setting metrics-server=true in profile "default-k8s-different-port-20220412201228-42006"
	I0412 20:30:43.316392  302775 addons.go:153] Setting addon storage-provisioner=true in "default-k8s-different-port-20220412201228-42006"
	I0412 20:30:43.316399  302775 addons.go:153] Setting addon metrics-server=true in "default-k8s-different-port-20220412201228-42006"
	I0412 20:30:43.316231  302775 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8444 KubernetesVersion:v1.23.5 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0412 20:30:43.318925  302775 out.go:176] * Verifying Kubernetes components...
	I0412 20:30:43.316370  302775 addons.go:65] Setting dashboard=true in profile "default-k8s-different-port-20220412201228-42006"
	I0412 20:30:43.319000  302775 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0412 20:30:43.319019  302775 addons.go:153] Setting addon dashboard=true in "default-k8s-different-port-20220412201228-42006"
	I0412 20:30:43.316478  302775 config.go:178] Loaded profile config "default-k8s-different-port-20220412201228-42006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.5
	I0412 20:30:43.316392  302775 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20220412201228-42006"
	W0412 20:30:43.316403  302775 addons.go:165] addon storage-provisioner should already be in state true
	I0412 20:30:43.319204  302775 host.go:66] Checking if "default-k8s-different-port-20220412201228-42006" exists ...
	W0412 20:30:43.316409  302775 addons.go:165] addon metrics-server should already be in state true
	I0412 20:30:43.319309  302775 host.go:66] Checking if "default-k8s-different-port-20220412201228-42006" exists ...
	W0412 20:30:43.319076  302775 addons.go:165] addon dashboard should already be in state true
	I0412 20:30:43.319411  302775 host.go:66] Checking if "default-k8s-different-port-20220412201228-42006" exists ...
	I0412 20:30:43.319521  302775 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220412201228-42006 --format={{.State.Status}}
	I0412 20:30:43.319712  302775 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220412201228-42006 --format={{.State.Status}}
	I0412 20:30:43.319812  302775 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220412201228-42006 --format={{.State.Status}}
	I0412 20:30:43.319884  302775 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220412201228-42006 --format={{.State.Status}}
	I0412 20:30:43.368004  302775 out.go:176]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0412 20:30:43.369733  302775 out.go:176]   - Using image kubernetesui/dashboard:v2.5.1
	I0412 20:30:43.368143  302775 addons.go:348] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0412 20:30:43.369830  302775 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0412 20:30:43.371713  302775 out.go:176]   - Using image k8s.gcr.io/echoserver:1.4
	I0412 20:30:43.369909  302775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220412201228-42006
	I0412 20:30:43.371811  302775 addons.go:348] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0412 20:30:43.371829  302775 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0412 20:30:43.371894  302775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220412201228-42006
	I0412 20:30:43.373558  302775 out.go:176]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0412 20:30:43.373752  302775 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0412 20:30:43.373772  302775 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0412 20:30:43.373846  302775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220412201228-42006
	I0412 20:30:43.384370  302775 addons.go:153] Setting addon default-storageclass=true in "default-k8s-different-port-20220412201228-42006"
	W0412 20:30:43.384406  302775 addons.go:165] addon default-storageclass should already be in state true
	I0412 20:30:43.384440  302775 host.go:66] Checking if "default-k8s-different-port-20220412201228-42006" exists ...
	I0412 20:30:43.384946  302775 cli_runner.go:164] Run: docker container inspect default-k8s-different-port-20220412201228-42006 --format={{.State.Status}}
	I0412 20:30:43.415524  302775 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20220412201228-42006" to be "Ready" ...
	I0412 20:30:43.415635  302775 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.23.5/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0412 20:30:43.419849  302775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220412201228-42006/id_rsa Username:docker}
	I0412 20:30:43.421835  302775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220412201228-42006/id_rsa Username:docker}
	I0412 20:30:43.422931  302775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220412201228-42006/id_rsa Username:docker}
	I0412 20:30:43.441543  302775 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
	I0412 20:30:43.441567  302775 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0412 20:30:43.441611  302775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-different-port-20220412201228-42006
	I0412 20:30:43.477201  302775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49437 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/default-k8s-different-port-20220412201228-42006/id_rsa Username:docker}
	I0412 20:30:43.584023  302775 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0412 20:30:43.594296  302775 addons.go:348] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0412 20:30:43.594323  302775 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0412 20:30:43.594540  302775 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0412 20:30:43.594567  302775 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0412 20:30:43.597433  302775 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0412 20:30:43.611081  302775 addons.go:348] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0412 20:30:43.611109  302775 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0412 20:30:43.612709  302775 addons.go:348] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0412 20:30:43.612735  302775 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0412 20:30:43.695590  302775 addons.go:348] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0412 20:30:43.695620  302775 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0412 20:30:43.695871  302775 addons.go:348] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0412 20:30:43.695896  302775 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0412 20:30:43.713161  302775 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0412 20:30:43.783491  302775 addons.go:348] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0412 20:30:43.783522  302775 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0412 20:30:43.786723  302775 start.go:777] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
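	
	Reconstructed from the sed expression in the replace pipeline at 20:30:43 above, the fragment spliced into the CoreDNS Corefile, immediately before its "forward . /etc/resolv.conf" line, is:
	
	  hosts {
	     192.168.49.1 host.minikube.internal
	     fallthrough
	  }
	
	which is what lets pods resolve the host-side gateway by name.
	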
	I0412 20:30:43.804035  302775 addons.go:348] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0412 20:30:43.804161  302775 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0412 20:30:43.880364  302775 addons.go:348] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0412 20:30:43.880416  302775 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0412 20:30:43.898688  302775 addons.go:348] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0412 20:30:43.898715  302775 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0412 20:30:43.979407  302775 addons.go:348] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0412 20:30:43.979444  302775 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0412 20:30:44.000255  302775 addons.go:348] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0412 20:30:44.000283  302775 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0412 20:30:44.102994  302775 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.5/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0412 20:30:44.494063  302775 addons.go:386] Verifying addon metrics-server=true in "default-k8s-different-port-20220412201228-42006"
	I0412 20:30:44.918251  302775 out.go:176] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0412 20:30:44.918280  302775 addons.go:417] enableAddons completed in 1.602020138s
	I0412 20:30:45.423200  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:30:47.923285  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:30:50.422835  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:30:52.923459  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:30:55.422462  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:30:57.923268  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:31:00.422559  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:31:02.422789  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:31:04.422907  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:31:06.923381  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:31:09.422313  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:31:11.922559  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:31:13.922722  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:31:16.423078  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:31:18.423314  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:31:20.923142  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:31:22.923173  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:31:24.923329  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:31:27.423082  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:31:29.922381  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:31:31.922796  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:31:33.923653  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:31:36.422332  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:31:38.423001  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:31:40.922454  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:31:42.923084  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:31:45.423255  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:31:47.922302  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:31:49.924482  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:31:52.422465  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:31:54.922902  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:31:56.923448  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:31:59.422807  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:32:01.422968  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:32:03.923510  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:32:06.422160  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:32:08.423365  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:32:10.922571  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:32:12.922895  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:32:14.923501  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:32:17.423175  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:32:19.922939  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:32:22.421806  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:32:24.422759  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:32:26.423058  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:32:28.922712  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:32:30.922856  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:32:33.422864  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:32:35.923228  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:32:38.423092  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:32:40.922749  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:32:42.923323  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:32:45.422441  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:32:47.423052  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:32:49.922914  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:32:51.923513  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:32:54.422949  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:32:56.423035  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:32:58.923416  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:33:01.422712  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:33:03.422921  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:33:05.923038  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:33:08.422910  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:33:10.923412  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:33:13.423048  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:33:15.922494  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:33:17.923130  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:33:19.923551  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:33:22.422029  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:33:24.422643  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:33:26.423175  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:33:28.923212  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:33:31.422303  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:33:33.423218  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:33:35.923095  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:33:38.422465  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:33:40.423119  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:33:42.924176  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:33:45.422942  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:33:47.923152  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:33:50.422822  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:33:52.923237  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:33:55.423255  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:33:57.923053  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:33:59.923203  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:34:01.923370  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:34:04.422633  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:34:06.922559  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:34:09.422887  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:34:11.423344  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:34:13.922945  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:34:16.423257  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:34:18.922588  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:34:20.923031  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:34:23.423271  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:34:25.423373  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:34:27.922498  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:34:29.922791  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:34:31.922929  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:34:34.423381  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:34:36.923060  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:34:38.923113  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:34:41.422479  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:34:43.422840  302775 node_ready.go:58] node "default-k8s-different-port-20220412201228-42006" has status "Ready":"False"
	I0412 20:34:43.425257  302775 node_ready.go:38] duration metric: took 4m0.009696502s waiting for node "default-k8s-different-port-20220412201228-42006" to be "Ready" ...
	I0412 20:34:43.428510  302775 out.go:176] 
	W0412 20:34:43.428724  302775 out.go:241] X Exiting due to GUEST_START: wait 6m0s for node: waiting for node to be ready: waitNodeCondition: timed out waiting for the condition
	W0412 20:34:43.428749  302775 out.go:241] * 
	W0412 20:34:43.429581  302775 out.go:241] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	5530029bc72dd       6de166512aa22       About a minute ago   Exited              kindnet-cni               7                   e7f85670aab62
	e482baaa02b92       3c53fa8541f95       13 minutes ago       Running             kube-proxy                0                   7a88fea74a24c
	270d41bcba3e1       3fc1d62d65872       13 minutes ago       Running             kube-apiserver            2                   135f4c9f6133c
	93c8ad43087d3       b0c9e5e4dbb14       13 minutes ago       Running             kube-controller-manager   2                   18279564d681b
	34e686863f9b5       884d49d6d8c9f       13 minutes ago       Running             kube-scheduler            2                   159717a64a264
	c4cb54a089e01       25f8c7f3da61c       13 minutes ago       Running             etcd                      2                   5a70dacd4ef7d
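	
	The kindnet-cni row above (attempt 7, Exited about a minute ago) is the restart loop behind the NotReady node in the sections that follow. A sketch of pulling that container's logs by hand, assuming crictl is present in the kic image and reusing the container ID and CRI socket that appear in this log, would be:
	
	  # sketch: read the exited kindnet-cni container's logs via crictl
	  docker exec default-k8s-different-port-20220412201228-42006 \
	    sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock \
	      logs 5530029bc72dd
	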
	
	* 
	* ==> containerd <==
	* -- Logs begin at Tue 2022-04-12 20:25:42 UTC, end at Tue 2022-04-12 20:43:47 UTC. --
	Apr 12 20:34:36 default-k8s-different-port-20220412201228-42006 containerd[345]: time="2022-04-12T20:34:36.321844074Z" level=warning msg="cleaning up after shim disconnected" id=cb781bd82f1bd82d9f6bdd2f4b6145a1671fc68f827524d1a49f6cd422e44fda namespace=k8s.io
	Apr 12 20:34:36 default-k8s-different-port-20220412201228-42006 containerd[345]: time="2022-04-12T20:34:36.321861395Z" level=info msg="cleaning up dead shim"
	Apr 12 20:34:36 default-k8s-different-port-20220412201228-42006 containerd[345]: time="2022-04-12T20:34:36.332784784Z" level=warning msg="cleanup warnings time=\"2022-04-12T20:34:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4419\n"
	Apr 12 20:34:36 default-k8s-different-port-20220412201228-42006 containerd[345]: time="2022-04-12T20:34:36.498731718Z" level=info msg="RemoveContainer for \"3428c7637ac2b397c4c900b07892e76da5d2b2c188019b6951de3538d7755ba1\""
	Apr 12 20:34:36 default-k8s-different-port-20220412201228-42006 containerd[345]: time="2022-04-12T20:34:36.503600658Z" level=info msg="RemoveContainer for \"3428c7637ac2b397c4c900b07892e76da5d2b2c188019b6951de3538d7755ba1\" returns successfully"
	Apr 12 20:37:23 default-k8s-different-port-20220412201228-42006 containerd[345]: time="2022-04-12T20:37:23.919429272Z" level=info msg="CreateContainer within sandbox \"e7f85670aab62d31b92969730ab69e718b4e4e593fb5dbb7fd69a13e8b1e1b80\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:6,}"
	Apr 12 20:37:23 default-k8s-different-port-20220412201228-42006 containerd[345]: time="2022-04-12T20:37:23.932963717Z" level=info msg="CreateContainer within sandbox \"e7f85670aab62d31b92969730ab69e718b4e4e593fb5dbb7fd69a13e8b1e1b80\" for &ContainerMetadata{Name:kindnet-cni,Attempt:6,} returns container id \"fc2b4e185e90162b1150b09f1d5930ec90f7a811aa4e08e2e57ee445c8b11830\""
	Apr 12 20:37:23 default-k8s-different-port-20220412201228-42006 containerd[345]: time="2022-04-12T20:37:23.933561103Z" level=info msg="StartContainer for \"fc2b4e185e90162b1150b09f1d5930ec90f7a811aa4e08e2e57ee445c8b11830\""
	Apr 12 20:37:24 default-k8s-different-port-20220412201228-42006 containerd[345]: time="2022-04-12T20:37:24.184817140Z" level=info msg="StartContainer for \"fc2b4e185e90162b1150b09f1d5930ec90f7a811aa4e08e2e57ee445c8b11830\" returns successfully"
	Apr 12 20:37:34 default-k8s-different-port-20220412201228-42006 containerd[345]: time="2022-04-12T20:37:34.419819172Z" level=info msg="shim disconnected" id=fc2b4e185e90162b1150b09f1d5930ec90f7a811aa4e08e2e57ee445c8b11830
	Apr 12 20:37:34 default-k8s-different-port-20220412201228-42006 containerd[345]: time="2022-04-12T20:37:34.419897875Z" level=warning msg="cleaning up after shim disconnected" id=fc2b4e185e90162b1150b09f1d5930ec90f7a811aa4e08e2e57ee445c8b11830 namespace=k8s.io
	Apr 12 20:37:34 default-k8s-different-port-20220412201228-42006 containerd[345]: time="2022-04-12T20:37:34.419919865Z" level=info msg="cleaning up dead shim"
	Apr 12 20:37:34 default-k8s-different-port-20220412201228-42006 containerd[345]: time="2022-04-12T20:37:34.430668391Z" level=warning msg="cleanup warnings time=\"2022-04-12T20:37:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4761\n"
	Apr 12 20:37:34 default-k8s-different-port-20220412201228-42006 containerd[345]: time="2022-04-12T20:37:34.799211796Z" level=info msg="RemoveContainer for \"cb781bd82f1bd82d9f6bdd2f4b6145a1671fc68f827524d1a49f6cd422e44fda\""
	Apr 12 20:37:34 default-k8s-different-port-20220412201228-42006 containerd[345]: time="2022-04-12T20:37:34.803677761Z" level=info msg="RemoveContainer for \"cb781bd82f1bd82d9f6bdd2f4b6145a1671fc68f827524d1a49f6cd422e44fda\" returns successfully"
	Apr 12 20:42:45 default-k8s-different-port-20220412201228-42006 containerd[345]: time="2022-04-12T20:42:45.919585301Z" level=info msg="CreateContainer within sandbox \"e7f85670aab62d31b92969730ab69e718b4e4e593fb5dbb7fd69a13e8b1e1b80\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:7,}"
	Apr 12 20:42:45 default-k8s-different-port-20220412201228-42006 containerd[345]: time="2022-04-12T20:42:45.936163792Z" level=info msg="CreateContainer within sandbox \"e7f85670aab62d31b92969730ab69e718b4e4e593fb5dbb7fd69a13e8b1e1b80\" for &ContainerMetadata{Name:kindnet-cni,Attempt:7,} returns container id \"5530029bc72ddf8f78d89c0dacc3ff93656d2c134b3751af434731be503a7c9c\""
	Apr 12 20:42:45 default-k8s-different-port-20220412201228-42006 containerd[345]: time="2022-04-12T20:42:45.936688559Z" level=info msg="StartContainer for \"5530029bc72ddf8f78d89c0dacc3ff93656d2c134b3751af434731be503a7c9c\""
	Apr 12 20:42:46 default-k8s-different-port-20220412201228-42006 containerd[345]: time="2022-04-12T20:42:46.084346587Z" level=info msg="StartContainer for \"5530029bc72ddf8f78d89c0dacc3ff93656d2c134b3751af434731be503a7c9c\" returns successfully"
	Apr 12 20:42:56 default-k8s-different-port-20220412201228-42006 containerd[345]: time="2022-04-12T20:42:56.412609617Z" level=info msg="shim disconnected" id=5530029bc72ddf8f78d89c0dacc3ff93656d2c134b3751af434731be503a7c9c
	Apr 12 20:42:56 default-k8s-different-port-20220412201228-42006 containerd[345]: time="2022-04-12T20:42:56.412669485Z" level=warning msg="cleaning up after shim disconnected" id=5530029bc72ddf8f78d89c0dacc3ff93656d2c134b3751af434731be503a7c9c namespace=k8s.io
	Apr 12 20:42:56 default-k8s-different-port-20220412201228-42006 containerd[345]: time="2022-04-12T20:42:56.412678527Z" level=info msg="cleaning up dead shim"
	Apr 12 20:42:56 default-k8s-different-port-20220412201228-42006 containerd[345]: time="2022-04-12T20:42:56.423467884Z" level=warning msg="cleanup warnings time=\"2022-04-12T20:42:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4869\n"
	Apr 12 20:42:57 default-k8s-different-port-20220412201228-42006 containerd[345]: time="2022-04-12T20:42:57.347378733Z" level=info msg="RemoveContainer for \"fc2b4e185e90162b1150b09f1d5930ec90f7a811aa4e08e2e57ee445c8b11830\""
	Apr 12 20:42:57 default-k8s-different-port-20220412201228-42006 containerd[345]: time="2022-04-12T20:42:57.352589906Z" level=info msg="RemoveContainer for \"fc2b4e185e90162b1150b09f1d5930ec90f7a811aa4e08e2e57ee445c8b11830\" returns successfully"
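	
	The journal excerpt above shows the same create / start / shim-disconnected cycle at 20:34, 20:37 and 20:42, i.e. kindnet-cni exiting roughly ten seconds after each start under an escalating backoff. One way to isolate those events from the full journal, as a sketch, is:
	
	  # sketch: filter the containerd journal for the kindnet-cni restart cycle
	  docker exec default-k8s-different-port-20220412201228-42006 \
	    journalctl -u containerd --no-pager | grep -E 'kindnet|shim disconnected'
	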
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-different-port-20220412201228-42006
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-different-port-20220412201228-42006
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=dcd548d63d1c0dcbdc0ffc0bd37d4379117c142f
	                    minikube.k8s.io/name=default-k8s-different-port-20220412201228-42006
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_04_12T20_30_29_0700
	                    minikube.k8s.io/version=v1.25.2
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 12 Apr 2022 20:30:25 +0000
	Taints:             node.kubernetes.io/not-ready:NoExecute
	                    node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-different-port-20220412201228-42006
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 12 Apr 2022 20:43:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 12 Apr 2022 20:40:57 +0000   Tue, 12 Apr 2022 20:30:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 12 Apr 2022 20:40:57 +0000   Tue, 12 Apr 2022 20:30:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 12 Apr 2022 20:40:57 +0000   Tue, 12 Apr 2022 20:30:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Tue, 12 Apr 2022 20:40:57 +0000   Tue, 12 Apr 2022 20:30:23 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    default-k8s-different-port-20220412201228-42006
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873828Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304695084Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32873828Ki
	  pods:               110
	System Info:
	  Machine ID:                 140a143b31184b58be947b52a01fff83
	  System UUID:                ef825856-4086-4c06-9629-95bede787d92
	  Boot ID:                    16b2caa1-c1b9-4ccc-85b8-d4dc3f51a5e1
	  Kernel Version:             5.13.0-1023-gcp
	  OS Image:                   Ubuntu 20.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.5.10
	  Kubelet Version:            v1.23.5
	  Kube-Proxy Version:         v1.23.5
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                                       ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-default-k8s-different-port-20220412201228-42006                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         13m
	  kube-system                 kindnet-hj8ss                                                              100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-default-k8s-different-port-20220412201228-42006             250m (3%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-default-k8s-different-port-20220412201228-42006    200m (2%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-6qsrn                                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-default-k8s-different-port-20220412201228-42006             100m (1%)     0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From        Message
	  ----    ------                   ----               ----        -------
	  Normal  Starting                 13m                kube-proxy  
	  Normal  NodeHasSufficientMemory  13m (x5 over 13m)  kubelet     Node default-k8s-different-port-20220412201228-42006 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x4 over 13m)  kubelet     Node default-k8s-different-port-20220412201228-42006 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x3 over 13m)  kubelet     Node default-k8s-different-port-20220412201228-42006 status is now: NodeHasSufficientPID
	  Normal  Starting                 13m                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m                kubelet     Node default-k8s-different-port-20220412201228-42006 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m                kubelet     Node default-k8s-different-port-20220412201228-42006 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m                kubelet     Node default-k8s-different-port-20220412201228-42006 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet     Updated Node Allocatable limit across pods
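	
	The Ready=False condition above blames an uninitialized CNI plugin even though the kindnet manifest was applied at 20:30:28 and kindnet-cni keeps crash-looping. A sketch of checking whether kindnet ever wrote its network config, using the conventional CNI paths (/opt/cni/bin appears in this log; /etc/cni/net.d is the kubelet default and may be overridden per profile), would be:
	
	  # sketch: look for CNI config and binaries inside the node container
	  docker exec default-k8s-different-port-20220412201228-42006 \
	    sh -c 'ls -la /etc/cni/net.d /opt/cni/bin'
	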
	
	* 
	* ==> dmesg <==
	* [  +0.000009] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +0.125166] IPv4: martian source 10.244.0.7 from 10.244.0.7, on dev vethe3e22a2f
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff d2 83 e6 b4 2e c9 08 06
	[  +0.519855] IPv4: martian source 10.244.0.8 from 10.244.0.8, on dev vethde433a44
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fe f7 53 8a eb 26 08 06
	[  +0.208112] IPv4: martian source 10.244.0.9 from 10.244.0.9, on dev veth05fda112
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 06 c9 f0 64 c1 d9 08 06
	[Apr12 20:12] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +1.026706] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +1.023926] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +2.947865] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +1.023840] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +1.019933] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +2.959880] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +1.007861] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	[  +1.023916] IPv4: martian source 10.244.0.43 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a d5 ab db 14 9b 08 06
	
	* 
	* ==> etcd [c4cb54a089e016fb617de68b938a6dc5f4fb174e64fbcd0bd528a56465898a39] <==
	* {"level":"info","ts":"2022-04-12T20:30:22.907Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-04-12T20:30:22.907Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-04-12T20:30:22.907Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2022-04-12T20:30:22.907Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-04-12T20:30:22.907Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-04-12T20:30:23.095Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2022-04-12T20:30:23.095Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2022-04-12T20:30:23.095Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2022-04-12T20:30:23.095Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2022-04-12T20:30:23.095Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-04-12T20:30:23.095Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2022-04-12T20:30:23.095Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2022-04-12T20:30:23.096Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2022-04-12T20:30:23.096Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:default-k8s-different-port-20220412201228-42006 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2022-04-12T20:30:23.096Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-04-12T20:30:23.096Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-04-12T20:30:23.096Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-04-12T20:30:23.096Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-04-12T20:30:23.096Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2022-04-12T20:30:23.096Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-04-12T20:30:23.096Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2022-04-12T20:30:23.097Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-04-12T20:30:23.097Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2022-04-12T20:40:23.528Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":696}
	{"level":"info","ts":"2022-04-12T20:40:23.529Z","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":696,"took":"660.736µs"}
	
	* 
	* ==> kernel <==
	*  20:43:47 up  3:26,  0 users,  load average: 0.06, 0.18, 0.55
	Linux default-k8s-different-port-20220412201228-42006 5.13.0-1023-gcp #28~20.04.1-Ubuntu SMP Wed Mar 30 03:51:07 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.4 LTS"
	
	* 
	* ==> kube-apiserver [270d41bcba3e1865495674af56cd4330a1e32e7c91d1b01dfd4ff7473395e341] <==
	* I0412 20:33:45.387300       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0412 20:35:26.743283       1 handler_proxy.go:104] no RequestInfo found in the context
	E0412 20:35:26.743371       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0412 20:35:26.743378       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0412 20:36:26.744212       1 handler_proxy.go:104] no RequestInfo found in the context
	E0412 20:36:26.744270       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0412 20:36:26.744278       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0412 20:38:26.744851       1 handler_proxy.go:104] no RequestInfo found in the context
	E0412 20:38:26.744948       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0412 20:38:26.744956       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0412 20:40:26.749350       1 handler_proxy.go:104] no RequestInfo found in the context
	E0412 20:40:26.749446       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0412 20:40:26.749460       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0412 20:41:26.749846       1 handler_proxy.go:104] no RequestInfo found in the context
	E0412 20:41:26.749931       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0412 20:41:26.749939       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0412 20:43:26.750476       1 handler_proxy.go:104] no RequestInfo found in the context
	E0412 20:43:26.750562       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0412 20:43:26.750572       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [93c8ad43087d3210b37b054a5ce8ed0bb95d75d9a5620ef164f8434c299fc123] <==
	* W0412 20:37:42.467164       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0412 20:38:12.058299       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0412 20:38:12.483749       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0412 20:38:42.066832       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0412 20:38:42.498137       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0412 20:39:12.079922       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0412 20:39:12.513479       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0412 20:39:42.090101       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0412 20:39:42.527977       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0412 20:40:12.098923       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0412 20:40:12.542537       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0412 20:40:42.111220       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0412 20:40:42.557737       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0412 20:41:12.122278       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0412 20:41:12.572118       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0412 20:41:42.133153       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0412 20:41:42.588122       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0412 20:42:12.142889       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0412 20:42:12.602048       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0412 20:42:42.152040       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0412 20:42:42.615870       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0412 20:43:12.161458       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0412 20:43:12.630484       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0412 20:43:42.170720       1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0412 20:43:42.645819       1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [e482baaa02b921af7a2d84713ae74d5e73f0045c7b5566cd1ca264037643afe1] <==
	* I0412 20:30:42.989823       1 node.go:163] Successfully retrieved node IP: 192.168.49.2
	I0412 20:30:42.989877       1 server_others.go:138] "Detected node IP" address="192.168.49.2"
	I0412 20:30:42.989910       1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0412 20:30:43.015484       1 server_others.go:206] "Using iptables Proxier"
	I0412 20:30:43.015529       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0412 20:30:43.015541       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I0412 20:30:43.015557       1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0412 20:30:43.016055       1 server.go:656] "Version info" version="v1.23.5"
	I0412 20:30:43.016766       1 config.go:226] "Starting endpoint slice config controller"
	I0412 20:30:43.016778       1 config.go:317] "Starting service config controller"
	I0412 20:30:43.016800       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0412 20:30:43.016801       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0412 20:30:43.116983       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0412 20:30:43.117015       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [34e686863f9b57d62f2cdd74d8adf7722e557fbf0077f3795f13ef4ae0783c90] <==
	* W0412 20:30:25.801595       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0412 20:30:25.801613       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0412 20:30:25.800817       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0412 20:30:25.802008       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0412 20:30:25.803969       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0412 20:30:25.803999       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0412 20:30:26.623962       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0412 20:30:26.623997       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0412 20:30:26.672428       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0412 20:30:26.672463       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0412 20:30:26.831990       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0412 20:30:26.832034       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0412 20:30:26.832832       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0412 20:30:26.832862       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0412 20:30:26.858524       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0412 20:30:26.858562       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0412 20:30:26.880852       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0412 20:30:26.880893       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0412 20:30:26.921532       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0412 20:30:26.921580       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0412 20:30:26.948831       1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0412 20:30:26.948873       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0412 20:30:27.080498       1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0412 20:30:27.080530       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0412 20:30:29.997243       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Tue 2022-04-12 20:25:42 UTC, end at Tue 2022-04-12 20:43:47 UTC. --
	Apr 12 20:42:44 default-k8s-different-port-20220412201228-42006 kubelet[3117]: E0412 20:42:44.260929    3117 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:42:45 default-k8s-different-port-20220412201228-42006 kubelet[3117]: I0412 20:42:45.917155    3117 scope.go:110] "RemoveContainer" containerID="fc2b4e185e90162b1150b09f1d5930ec90f7a811aa4e08e2e57ee445c8b11830"
	Apr 12 20:42:49 default-k8s-different-port-20220412201228-42006 kubelet[3117]: E0412 20:42:49.261681    3117 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:42:54 default-k8s-different-port-20220412201228-42006 kubelet[3117]: E0412 20:42:54.262981    3117 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:42:57 default-k8s-different-port-20220412201228-42006 kubelet[3117]: I0412 20:42:57.346326    3117 scope.go:110] "RemoveContainer" containerID="fc2b4e185e90162b1150b09f1d5930ec90f7a811aa4e08e2e57ee445c8b11830"
	Apr 12 20:42:57 default-k8s-different-port-20220412201228-42006 kubelet[3117]: I0412 20:42:57.346606    3117 scope.go:110] "RemoveContainer" containerID="5530029bc72ddf8f78d89c0dacc3ff93656d2c134b3751af434731be503a7c9c"
	Apr 12 20:42:57 default-k8s-different-port-20220412201228-42006 kubelet[3117]: E0412 20:42:57.346899    3117 pod_workers.go:949] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kindnet-cni pod=kindnet-hj8ss_kube-system(fca962d5-1da5-4dc1-8931-01bf2864674f)\"" pod="kube-system/kindnet-hj8ss" podUID=fca962d5-1da5-4dc1-8931-01bf2864674f
	Apr 12 20:42:59 default-k8s-different-port-20220412201228-42006 kubelet[3117]: E0412 20:42:59.263695    3117 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:43:04 default-k8s-different-port-20220412201228-42006 kubelet[3117]: E0412 20:43:04.265306    3117 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:43:08 default-k8s-different-port-20220412201228-42006 kubelet[3117]: I0412 20:43:08.917045    3117 scope.go:110] "RemoveContainer" containerID="5530029bc72ddf8f78d89c0dacc3ff93656d2c134b3751af434731be503a7c9c"
	Apr 12 20:43:08 default-k8s-different-port-20220412201228-42006 kubelet[3117]: E0412 20:43:08.917373    3117 pod_workers.go:949] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kindnet-cni pod=kindnet-hj8ss_kube-system(fca962d5-1da5-4dc1-8931-01bf2864674f)\"" pod="kube-system/kindnet-hj8ss" podUID=fca962d5-1da5-4dc1-8931-01bf2864674f
	Apr 12 20:43:09 default-k8s-different-port-20220412201228-42006 kubelet[3117]: E0412 20:43:09.266692    3117 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:43:14 default-k8s-different-port-20220412201228-42006 kubelet[3117]: E0412 20:43:14.268062    3117 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:43:19 default-k8s-different-port-20220412201228-42006 kubelet[3117]: E0412 20:43:19.269476    3117 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:43:20 default-k8s-different-port-20220412201228-42006 kubelet[3117]: I0412 20:43:20.916370    3117 scope.go:110] "RemoveContainer" containerID="5530029bc72ddf8f78d89c0dacc3ff93656d2c134b3751af434731be503a7c9c"
	Apr 12 20:43:20 default-k8s-different-port-20220412201228-42006 kubelet[3117]: E0412 20:43:20.916677    3117 pod_workers.go:949] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kindnet-cni pod=kindnet-hj8ss_kube-system(fca962d5-1da5-4dc1-8931-01bf2864674f)\"" pod="kube-system/kindnet-hj8ss" podUID=fca962d5-1da5-4dc1-8931-01bf2864674f
	Apr 12 20:43:24 default-k8s-different-port-20220412201228-42006 kubelet[3117]: E0412 20:43:24.270888    3117 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:43:29 default-k8s-different-port-20220412201228-42006 kubelet[3117]: E0412 20:43:29.271797    3117 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:43:32 default-k8s-different-port-20220412201228-42006 kubelet[3117]: I0412 20:43:32.916284    3117 scope.go:110] "RemoveContainer" containerID="5530029bc72ddf8f78d89c0dacc3ff93656d2c134b3751af434731be503a7c9c"
	Apr 12 20:43:32 default-k8s-different-port-20220412201228-42006 kubelet[3117]: E0412 20:43:32.916592    3117 pod_workers.go:949] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kindnet-cni pod=kindnet-hj8ss_kube-system(fca962d5-1da5-4dc1-8931-01bf2864674f)\"" pod="kube-system/kindnet-hj8ss" podUID=fca962d5-1da5-4dc1-8931-01bf2864674f
	Apr 12 20:43:34 default-k8s-different-port-20220412201228-42006 kubelet[3117]: E0412 20:43:34.273510    3117 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:43:39 default-k8s-different-port-20220412201228-42006 kubelet[3117]: E0412 20:43:39.275107    3117 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:43:44 default-k8s-different-port-20220412201228-42006 kubelet[3117]: E0412 20:43:44.276493    3117 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
	Apr 12 20:43:45 default-k8s-different-port-20220412201228-42006 kubelet[3117]: I0412 20:43:45.916866    3117 scope.go:110] "RemoveContainer" containerID="5530029bc72ddf8f78d89c0dacc3ff93656d2c134b3751af434731be503a7c9c"
	Apr 12 20:43:45 default-k8s-different-port-20220412201228-42006 kubelet[3117]: E0412 20:43:45.917289    3117 pod_workers.go:949] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kindnet-cni pod=kindnet-hj8ss_kube-system(fca962d5-1da5-4dc1-8931-01bf2864674f)\"" pod="kube-system/kindnet-hj8ss" podUID=fca962d5-1da5-4dc1-8931-01bf2864674f
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20220412201228-42006 -n default-k8s-different-port-20220412201228-42006
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-different-port-20220412201228-42006 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: coredns-64897985d-979gq metrics-server-b955d9d8-splbx storage-provisioner dashboard-metrics-scraper-56974995fc-wwmdw kubernetes-dashboard-8469778f77-xs557
helpers_test.go:272: ======> post-mortem[TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context default-k8s-different-port-20220412201228-42006 describe pod coredns-64897985d-979gq metrics-server-b955d9d8-splbx storage-provisioner dashboard-metrics-scraper-56974995fc-wwmdw kubernetes-dashboard-8469778f77-xs557
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context default-k8s-different-port-20220412201228-42006 describe pod coredns-64897985d-979gq metrics-server-b955d9d8-splbx storage-provisioner dashboard-metrics-scraper-56974995fc-wwmdw kubernetes-dashboard-8469778f77-xs557: exit status 1 (70.668099ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-64897985d-979gq" not found
	Error from server (NotFound): pods "metrics-server-b955d9d8-splbx" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-56974995fc-wwmdw" not found
	Error from server (NotFound): pods "kubernetes-dashboard-8469778f77-xs557" not found

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context default-k8s-different-port-20220412201228-42006 describe pod coredns-64897985d-979gq metrics-server-b955d9d8-splbx storage-provisioner dashboard-metrics-scraper-56974995fc-wwmdw kubernetes-dashboard-8469778f77-xs557: exit status 1
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (542.51s)
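The kubelet log captured above shows the proximate cause: the kindnet-cni container sits in CrashLoopBackOff, so the CNI plugin never initializes and the node never reports NetworkReady. A minimal sketch for digging into a cluster stuck in this state (the profile name is taken from this run; the app=kindnet pod label is an assumption about the deployed kindnet manifest):

	# Profile/context name from the failing run above.
	PROFILE=default-k8s-different-port-20220412201228-42006
	# List the kindnet pods and pull the DaemonSet's recent log output.
	kubectl --context "$PROFILE" -n kube-system get pods -l app=kindnet -o wide
	kubectl --context "$PROFILE" -n kube-system logs daemonset/kindnet --tail=50
	# Check whether any CNI config was ever written to the cni-conf-dir
	# minikube auto-sets for containerd runs (/etc/cni/net.mk, per the
	# "Last Start" logs later in this report).
	minikube -p "$PROFILE" ssh -- ls -l /etc/cni/net.mk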

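To reproduce just this failure outside CI, the integration suite can be narrowed with go test's -run filter; a sketch, assuming a local minikube source checkout with out/minikube-linux-amd64 already built (--minikube-start-args is the suite's own flag for choosing driver and runtime):

	go test ./test/integration \
	  -run 'TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop' \
	  -timeout 60m \
	  -args --minikube-start-args='--driver=docker --container-runtime=containerd'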
                                                
                                    

Test pass (217/259)

Order   Passed test   Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 17.6
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.07
10 TestDownloadOnly/v1.23.5/json-events 5.47
11 TestDownloadOnly/v1.23.5/preload-exists 0
15 TestDownloadOnly/v1.23.5/LogsDuration 0.08
17 TestDownloadOnly/v1.23.6-rc.0/json-events 7.44
18 TestDownloadOnly/v1.23.6-rc.0/preload-exists 0
22 TestDownloadOnly/v1.23.6-rc.0/LogsDuration 0.07
23 TestDownloadOnly/DeleteAll 0.34
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.21
25 TestDownloadOnlyKic 2.91
26 TestBinaryMirror 0.86
27 TestOffline 118.38
29 TestAddons/Setup 126.33
31 TestAddons/parallel/Registry 27.65
32 TestAddons/parallel/Ingress 24.19
33 TestAddons/parallel/MetricsServer 5.55
34 TestAddons/parallel/HelmTiller 11.05
36 TestAddons/parallel/CSI 55.46
38 TestAddons/serial/GCPAuth 38.91
39 TestAddons/StoppedEnableDisable 20.43
40 TestCertOptions 57.75
41 TestCertExpiration 440.99
43 TestForceSystemdFlag 51.18
44 TestForceSystemdEnv 68.22
45 TestKVMDriverInstallOrUpdate 4.04
49 TestErrorSpam/setup 39.68
50 TestErrorSpam/start 0.97
51 TestErrorSpam/status 1.18
52 TestErrorSpam/pause 2.08
53 TestErrorSpam/unpause 1.62
54 TestErrorSpam/stop 14.95
57 TestFunctional/serial/CopySyncFile 0
58 TestFunctional/serial/StartWithProxy 58.05
59 TestFunctional/serial/AuditLog 0
60 TestFunctional/serial/SoftStart 15.75
61 TestFunctional/serial/KubeContext 0.04
62 TestFunctional/serial/KubectlGetPods 0.18
65 TestFunctional/serial/CacheCmd/cache/add_remote 3.25
66 TestFunctional/serial/CacheCmd/cache/add_local 2.32
67 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.06
68 TestFunctional/serial/CacheCmd/cache/list 0.06
69 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.37
70 TestFunctional/serial/CacheCmd/cache/cache_reload 2
71 TestFunctional/serial/CacheCmd/cache/delete 0.13
72 TestFunctional/serial/MinikubeKubectlCmd 0.11
73 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
74 TestFunctional/serial/ExtraConfig 39.5
75 TestFunctional/serial/ComponentHealth 0.06
76 TestFunctional/serial/LogsCmd 1.17
77 TestFunctional/serial/LogsFileCmd 1.18
79 TestFunctional/parallel/ConfigCmd 0.45
80 TestFunctional/parallel/DashboardCmd 29.26
81 TestFunctional/parallel/DryRun 0.6
82 TestFunctional/parallel/InternationalLanguage 0.55
83 TestFunctional/parallel/StatusCmd 1.77
86 TestFunctional/parallel/ServiceCmd 13.46
87 TestFunctional/parallel/ServiceCmdConnect 10.88
88 TestFunctional/parallel/AddonsCmd 0.21
89 TestFunctional/parallel/PersistentVolumeClaim 28.05
91 TestFunctional/parallel/SSHCmd 0.83
92 TestFunctional/parallel/CpCmd 1.76
93 TestFunctional/parallel/MySQL 22.88
94 TestFunctional/parallel/FileSync 0.36
95 TestFunctional/parallel/CertSync 2.16
99 TestFunctional/parallel/NodeLabels 0.07
101 TestFunctional/parallel/NonActiveRuntimeDisabled 0.82
103 TestFunctional/parallel/Version/short 0.09
104 TestFunctional/parallel/Version/components 2.07
105 TestFunctional/parallel/UpdateContextCmd/no_changes 0.23
106 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.23
107 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.25
108 TestFunctional/parallel/ImageCommands/ImageListShort 0.33
109 TestFunctional/parallel/ImageCommands/ImageListTable 0.3
110 TestFunctional/parallel/ImageCommands/ImageListJson 0.31
111 TestFunctional/parallel/ImageCommands/ImageListYaml 0.29
112 TestFunctional/parallel/ImageCommands/ImageBuild 5.08
113 TestFunctional/parallel/ImageCommands/Setup 1.54
114 TestFunctional/parallel/MountCmd/any-port 8.76
116 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
118 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 11.27
119 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.54
120 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 4.76
121 TestFunctional/parallel/MountCmd/specific-port 2.2
122 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.07
123 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
124 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
128 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
129 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.81
130 TestFunctional/parallel/ImageCommands/ImageRemove 0.52
131 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.15
132 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.01
133 TestFunctional/parallel/ProfileCmd/profile_not_create 0.52
134 TestFunctional/parallel/ProfileCmd/profile_list 0.43
135 TestFunctional/parallel/ProfileCmd/profile_json_output 0.46
136 TestFunctional/delete_addon-resizer_images 0.1
137 TestFunctional/delete_my-image_image 0.03
138 TestFunctional/delete_minikube_cached_images 0.03
141 TestIngressAddonLegacy/StartLegacyK8sCluster 89.73
143 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 13.2
144 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.39
145 TestIngressAddonLegacy/serial/ValidateIngressAddons 38.63
148 TestJSONOutput/start/Command 88.14
149 TestJSONOutput/start/Audit 0
151 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
152 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
154 TestJSONOutput/pause/Command 0.73
155 TestJSONOutput/pause/Audit 0
157 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
158 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
160 TestJSONOutput/unpause/Command 0.66
161 TestJSONOutput/unpause/Audit 0
163 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
164 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
166 TestJSONOutput/stop/Command 15.77
167 TestJSONOutput/stop/Audit 0
169 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
170 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
171 TestErrorJSONOutput 0.3
173 TestKicCustomNetwork/create_custom_network 33.06
174 TestKicCustomNetwork/use_default_bridge_network 27.39
175 TestKicExistingNetwork 28.85
176 TestKicCustomSubnet 28.47
177 TestMainNoArgs 0.06
180 TestMountStart/serial/StartWithMountFirst 5.04
181 TestMountStart/serial/VerifyMountFirst 0.34
182 TestMountStart/serial/StartWithMountSecond 4.96
183 TestMountStart/serial/VerifyMountSecond 0.34
184 TestMountStart/serial/DeleteFirst 1.88
185 TestMountStart/serial/VerifyMountPostDelete 0.34
186 TestMountStart/serial/Stop 1.27
187 TestMountStart/serial/RestartStopped 6.44
188 TestMountStart/serial/VerifyMountPostStop 0.34
191 TestMultiNode/serial/FreshStart2Nodes 102.56
192 TestMultiNode/serial/DeployApp2Nodes 4.65
193 TestMultiNode/serial/PingHostFrom2Pods 0.84
194 TestMultiNode/serial/AddNode 41.71
195 TestMultiNode/serial/ProfileList 0.38
196 TestMultiNode/serial/CopyFile 12.3
197 TestMultiNode/serial/StopNode 7.04
198 TestMultiNode/serial/StartAfterStop 36.02
199 TestMultiNode/serial/RestartKeepsNodes 174.48
200 TestMultiNode/serial/DeleteNode 9.87
201 TestMultiNode/serial/StopMultiNode 40.52
202 TestMultiNode/serial/RestartMultiNode 113.47
203 TestMultiNode/serial/ValidateNameConflict 42.96
208 TestPreload 129.16
210 TestScheduledStopUnix 117.18
213 TestInsufficientStorage 17.67
214 TestRunningBinaryUpgrade 87.22
216 TestKubernetesUpgrade 167.25
217 TestMissingContainerUpgrade 152.78
218 TestStoppedBinaryUpgrade/Setup 0.55
220 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
221 TestNoKubernetes/serial/StartWithK8s 60.87
222 TestStoppedBinaryUpgrade/Upgrade 117.82
223 TestNoKubernetes/serial/StartWithStopK8s 19.05
224 TestNoKubernetes/serial/Start 4.54
225 TestNoKubernetes/serial/VerifyK8sNotRunning 0.38
226 TestNoKubernetes/serial/ProfileList 1.8
227 TestNoKubernetes/serial/Stop 1.31
228 TestNoKubernetes/serial/StartNoArgs 6
229 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.65
230 TestStoppedBinaryUpgrade/MinikubeLogs 0.91
238 TestNetworkPlugins/group/false 0.72
251 TestNetworkPlugins/group/auto/Start 61.39
252 TestNetworkPlugins/group/custom-weave/Start 75.39
253 TestNetworkPlugins/group/auto/KubeletFlags 0.4
254 TestNetworkPlugins/group/auto/NetCatPod 13.33
255 TestNetworkPlugins/group/auto/DNS 0.15
256 TestNetworkPlugins/group/auto/Localhost 0.13
257 TestNetworkPlugins/group/auto/HairPin 0.13
258 TestNetworkPlugins/group/cilium/Start 82.62
259 TestNetworkPlugins/group/custom-weave/KubeletFlags 0.47
260 TestNetworkPlugins/group/custom-weave/NetCatPod 8.39
262 TestNetworkPlugins/group/cilium/ControllerPod 5.02
263 TestNetworkPlugins/group/cilium/KubeletFlags 0.35
264 TestNetworkPlugins/group/cilium/NetCatPod 9.87
265 TestNetworkPlugins/group/cilium/DNS 0.15
266 TestNetworkPlugins/group/cilium/Localhost 0.14
267 TestNetworkPlugins/group/cilium/HairPin 0.13
268 TestNetworkPlugins/group/enable-default-cni/Start 61.67
269 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.37
270 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.26
273 TestNetworkPlugins/group/bridge/Start 315.56
277 TestStartStop/group/no-preload/serial/FirstStart 74.08
280 TestStartStop/group/no-preload/serial/DeployApp 9.52
281 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.68
282 TestStartStop/group/no-preload/serial/Stop 20.25
283 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
284 TestStartStop/group/no-preload/serial/SecondStart 325.13
285 TestNetworkPlugins/group/bridge/KubeletFlags 0.38
286 TestNetworkPlugins/group/bridge/NetCatPod 9.28
290 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 12.01
291 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
292 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.51
293 TestStartStop/group/no-preload/serial/Pause 3.32
297 TestStartStop/group/newest-cni/serial/FirstStart 54.09
298 TestStartStop/group/newest-cni/serial/DeployApp 0
299 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.59
300 TestStartStop/group/newest-cni/serial/Stop 20.24
301 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
302 TestStartStop/group/newest-cni/serial/SecondStart 34.29
303 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
304 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
305 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.39
306 TestStartStop/group/newest-cni/serial/Pause 3.02
307 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.61
308 TestStartStop/group/old-k8s-version/serial/Stop 5.93
310 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.21
312 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.62
313 TestStartStop/group/embed-certs/serial/Stop 10.6
314 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
316 TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive 0.63
317 TestStartStop/group/default-k8s-different-port/serial/Stop 10.37
318 TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop 0.2
TestDownloadOnly/v1.16.0/json-events (17.6s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-20220412192021-42006 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20220412192021-42006 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (17.602296366s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (17.60s)
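This subtest exercises minikube's machine-readable output: with -o=json, each progress event is emitted as a single-line JSON cloud event on stdout. A small sketch of inspecting that stream by hand (the profile name here is a placeholder, and jq is assumed to be installed):

	out/minikube-linux-amd64 start -o=json --download-only \
	  -p download-only-demo --driver=docker --container-runtime=containerd \
	  | jq -r '.type'   # e.g. io.k8s.sigs.minikube.step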

                                                
                                    
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-20220412192021-42006
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20220412192021-42006: exit status 85 (73.788727ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/04/12 19:20:21
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.18 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0412 19:20:21.141081   42018 out.go:297] Setting OutFile to fd 1 ...
	I0412 19:20:21.141192   42018 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0412 19:20:21.141200   42018 out.go:310] Setting ErrFile to fd 2...
	I0412 19:20:21.141205   42018 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0412 19:20:21.141302   42018 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/bin
	W0412 19:20:21.141428   42018 root.go:300] Error reading config file at /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/config/config.json: no such file or directory
	I0412 19:20:21.141675   42018 out.go:304] Setting JSON to true
	I0412 19:20:21.142509   42018 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":7374,"bootTime":1649783847,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1023-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0412 19:20:21.142585   42018 start.go:125] virtualization: kvm guest
	W0412 19:20:21.145481   42018 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball: no such file or directory
	I0412 19:20:21.145546   42018 notify.go:193] Checking for updates...
	I0412 19:20:21.147332   42018 driver.go:346] Setting default libvirt URI to qemu:///system
	I0412 19:20:21.183842   42018 docker.go:137] docker version: linux-20.10.14
	I0412 19:20:21.183978   42018 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0412 19:20:21.585589   42018 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:75 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:33 SystemTime:2022-04-12 19:20:21.20965758 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1023-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662799872 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0412 19:20:21.585705   42018 docker.go:254] overlay module found
	I0412 19:20:21.587865   42018 start.go:284] selected driver: docker
	I0412 19:20:21.587879   42018 start.go:801] validating driver "docker" against <nil>
	I0412 19:20:21.588088   42018 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0412 19:20:21.678463   42018 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:75 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:33 SystemTime:2022-04-12 19:20:21.614748655 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1023-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662799872 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0412 19:20:21.678600   42018 start_flags.go:292] no existing cluster config was found, will generate one from the flags 
	I0412 19:20:21.679143   42018 start_flags.go:373] Using suggested 8000MB memory alloc based on sys=32103MB, container=32103MB
	I0412 19:20:21.679262   42018 start_flags.go:829] Wait components to verify : map[apiserver:true system_pods:true]
	I0412 19:20:21.679464   42018 cni.go:93] Creating CNI manager for ""
	I0412 19:20:21.679477   42018 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0412 19:20:21.679493   42018 cni.go:217] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0412 19:20:21.679504   42018 cni.go:222] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0412 19:20:21.679517   42018 start_flags.go:301] Found "CNI" CNI - setting NetworkPlugin=cni
	I0412 19:20:21.679534   42018 start_flags.go:306] config:
	{Name:download-only-20220412192021-42006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-20220412192021-42006 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0412 19:20:21.681486   42018 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0412 19:20:21.682849   42018 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0412 19:20:21.682942   42018 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon
	I0412 19:20:21.724936   42018 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon, skipping pull
	I0412 19:20:21.724963   42018 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 to local cache
	I0412 19:20:21.725223   42018 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local cache directory
	I0412 19:20:21.725308   42018 image.go:119] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 to local cache
	I0412 19:20:21.796154   42018 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4
	I0412 19:20:21.796185   42018 cache.go:57] Caching tarball of preloaded images
	I0412 19:20:21.796388   42018 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0412 19:20:21.798680   42018 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	I0412 19:20:21.922497   42018 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:d96a2b2afa188e17db7ddabb58d563fd -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4
	I0412 19:20:24.508467   42018 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	I0412 19:20:24.508569   42018 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	I0412 19:20:25.322128   42018 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on containerd
	I0412 19:20:25.322453   42018 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/download-only-20220412192021-42006/config.json ...
	I0412 19:20:25.322485   42018 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/download-only-20220412192021-42006/config.json: {Name:mk7c532d70c94b6f4f75cf620b8d37f113c476b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0412 19:20:25.322657   42018 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0412 19:20:25.322897   42018 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/linux/amd64/kubectl.sha1 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/linux/amd64/v1.16.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220412192021-42006"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.07s)
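
The log above records the preload flow: download.go fetches the tarball with its expected MD5 carried in the URL's checksum query parameter, then preload.go verifies and saves the digest. Below is a minimal Go sketch of that download-then-verify step, using the URL and MD5 recorded in the log; the helper name downloadWithMD5 is invented here for illustration, not minikube's actual API.

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
)

// downloadWithMD5 streams url into dest while hashing the bytes, then
// fails if the hex-encoded digest does not match wantMD5.
func downloadWithMD5(url, dest, wantMD5 string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	out, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer out.Close()

	h := md5.New()
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
	}
	return nil
}

func main() {
	// URL and MD5 are the ones recorded in the download.go:101 line above.
	url := "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4"
	if err := downloadWithMD5(url, "/tmp/preload.tar.lz4", "d96a2b2afa188e17db7ddabb58d563fd"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}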

                                                
                                    
TestDownloadOnly/v1.23.5/json-events (5.47s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.23.5/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-20220412192021-42006 --force --alsologtostderr --kubernetes-version=v1.23.5 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20220412192021-42006 --force --alsologtostderr --kubernetes-version=v1.23.5 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (5.466158046s)
--- PASS: TestDownloadOnly/v1.23.5/json-events (5.47s)

                                                
                                    
TestDownloadOnly/v1.23.5/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.23.5/preload-exists
--- PASS: TestDownloadOnly/v1.23.5/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.23.5/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.23.5/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-20220412192021-42006
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20220412192021-42006: exit status 85 (76.610165ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/04/12 19:20:38
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.18 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0412 19:20:38.819227   42165 out.go:297] Setting OutFile to fd 1 ...
	I0412 19:20:38.819347   42165 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0412 19:20:38.819358   42165 out.go:310] Setting ErrFile to fd 2...
	I0412 19:20:38.819362   42165 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0412 19:20:38.819482   42165 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/bin
	W0412 19:20:38.819617   42165 root.go:300] Error reading config file at /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/config/config.json: no such file or directory
	I0412 19:20:38.819757   42165 out.go:304] Setting JSON to true
	I0412 19:20:38.820635   42165 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":7392,"bootTime":1649783847,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1023-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0412 19:20:38.820717   42165 start.go:125] virtualization: kvm guest
	I0412 19:20:38.823505   42165 notify.go:193] Checking for updates...
	I0412 19:20:38.825706   42165 config.go:178] Loaded profile config "download-only-20220412192021-42006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	W0412 19:20:38.825766   42165 start.go:709] api.Load failed for download-only-20220412192021-42006: filestore "download-only-20220412192021-42006": Docker machine "download-only-20220412192021-42006" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0412 19:20:38.825825   42165 driver.go:346] Setting default libvirt URI to qemu:///system
	W0412 19:20:38.825857   42165 start.go:709] api.Load failed for download-only-20220412192021-42006: filestore "download-only-20220412192021-42006": Docker machine "download-only-20220412192021-42006" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0412 19:20:38.863394   42165 docker.go:137] docker version: linux-20.10.14
	I0412 19:20:38.863487   42165 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0412 19:20:38.960025   42165 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:75 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:33 SystemTime:2022-04-12 19:20:38.889417811 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1023-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662799872 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0412 19:20:38.960153   42165 docker.go:254] overlay module found
	I0412 19:20:38.962532   42165 start.go:284] selected driver: docker
	I0412 19:20:38.962546   42165 start.go:801] validating driver "docker" against &{Name:download-only-20220412192021-42006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-20220412192021-42006 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0412 19:20:38.962789   42165 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0412 19:20:39.058314   42165 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:75 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:33 SystemTime:2022-04-12 19:20:38.989622302 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1023-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662799872 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0412 19:20:39.058842   42165 cni.go:93] Creating CNI manager for ""
	I0412 19:20:39.058858   42165 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0412 19:20:39.058871   42165 start_flags.go:306] config:
	{Name:download-only-20220412192021-42006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:download-only-20220412192021-42006 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0412 19:20:39.061184   42165 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0412 19:20:39.062673   42165 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime containerd
	I0412 19:20:39.062707   42165 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon
	I0412 19:20:39.107475   42165 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon, skipping pull
	I0412 19:20:39.107501   42165 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 to local cache
	I0412 19:20:39.107714   42165 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local cache directory
	I0412 19:20:39.107731   42165 image.go:62] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local cache directory, skipping pull
	I0412 19:20:39.107735   42165 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 exists in cache, skipping pull
	I0412 19:20:39.107747   42165 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 as a tarball
	I0412 19:20:39.184179   42165 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.23.5/preloaded-images-k8s-v18-v1.23.5-containerd-overlay2-amd64.tar.lz4
	I0412 19:20:39.184206   42165 cache.go:57] Caching tarball of preloaded images
	I0412 19:20:39.184420   42165 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime containerd
	I0412 19:20:39.186854   42165 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.23.5-containerd-overlay2-amd64.tar.lz4 ...
	I0412 19:20:39.306365   42165 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.23.5/preloaded-images-k8s-v18-v1.23.5-containerd-overlay2-amd64.tar.lz4?checksum=md5:5b943d1614cebc406598598f3fb1d5ba -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-containerd-overlay2-amd64.tar.lz4
	I0412 19:20:42.553053   42165 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.23.5-containerd-overlay2-amd64.tar.lz4 ...
	I0412 19:20:42.553159   42165 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.5-containerd-overlay2-amd64.tar.lz4 ...
	I0412 19:20:43.456239   42165 cache.go:60] Finished verifying existence of preloaded tar for v1.23.5 on containerd
	I0412 19:20:43.456388   42165 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/download-only-20220412192021-42006/config.json ...
	I0412 19:20:43.456586   42165 preload.go:132] Checking if preload exists for k8s version v1.23.5 and runtime containerd
	I0412 19:20:43.456947   42165 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.23.5/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.23.5/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/linux/amd64/v1.23.5/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220412192021-42006"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.23.5/LogsDuration (0.08s)
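
Each start above probes the host twice with `docker system info --format "{{json .}}"` and decodes the result (the info.go:265 lines). A minimal Go sketch of that probe, assuming docker is on PATH; the dockerInfo struct here is a small hand-picked subset of the fields visible in the log, not minikube's actual type.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// dockerInfo captures a few of the JSON keys emitted by
// "docker system info --format {{json .}}".
type dockerInfo struct {
	Driver       string `json:"Driver"`
	NCPU         int    `json:"NCPU"`
	MemTotal     int64  `json:"MemTotal"`
	CgroupDriver string `json:"CgroupDriver"`
}

func main() {
	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
	if err != nil {
		panic(err)
	}
	var info dockerInfo
	if err := json.Unmarshal(out, &info); err != nil {
		panic(err)
	}
	// Driver validation cares about fields like these, e.g. the storage
	// driver (overlay2 in the log) and the cgroup driver (cgroupfs).
	fmt.Printf("driver=%s cpus=%d mem=%d cgroup=%s\n",
		info.Driver, info.NCPU, info.MemTotal, info.CgroupDriver)
}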

                                                
                                    
TestDownloadOnly/v1.23.6-rc.0/json-events (7.44s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.23.6-rc.0/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-20220412192021-42006 --force --alsologtostderr --kubernetes-version=v1.23.6-rc.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20220412192021-42006 --force --alsologtostderr --kubernetes-version=v1.23.6-rc.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (7.44063366s)
--- PASS: TestDownloadOnly/v1.23.6-rc.0/json-events (7.44s)

                                                
                                    
TestDownloadOnly/v1.23.6-rc.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.23.6-rc.0/preload-exists
--- PASS: TestDownloadOnly/v1.23.6-rc.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.23.6-rc.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.23.6-rc.0/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-20220412192021-42006
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20220412192021-42006: exit status 85 (69.4384ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/04/12 19:20:44
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.18 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0412 19:20:44.363537   42312 out.go:297] Setting OutFile to fd 1 ...
	I0412 19:20:44.363669   42312 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0412 19:20:44.363682   42312 out.go:310] Setting ErrFile to fd 2...
	I0412 19:20:44.363690   42312 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0412 19:20:44.363795   42312 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/bin
	W0412 19:20:44.363912   42312 root.go:300] Error reading config file at /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/config/config.json: no such file or directory
	I0412 19:20:44.364013   42312 out.go:304] Setting JSON to true
	I0412 19:20:44.364809   42312 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":7398,"bootTime":1649783847,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1023-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0412 19:20:44.364870   42312 start.go:125] virtualization: kvm guest
	I0412 19:20:44.367781   42312 notify.go:193] Checking for updates...
	I0412 19:20:44.370250   42312 config.go:178] Loaded profile config "download-only-20220412192021-42006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.5
	W0412 19:20:44.370312   42312 start.go:709] api.Load failed for download-only-20220412192021-42006: filestore "download-only-20220412192021-42006": Docker machine "download-only-20220412192021-42006" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0412 19:20:44.370357   42312 driver.go:346] Setting default libvirt URI to qemu:///system
	W0412 19:20:44.370384   42312 start.go:709] api.Load failed for download-only-20220412192021-42006: filestore "download-only-20220412192021-42006": Docker machine "download-only-20220412192021-42006" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0412 19:20:44.406156   42312 docker.go:137] docker version: linux-20.10.14
	I0412 19:20:44.406237   42312 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0412 19:20:44.497822   42312 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:75 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:33 SystemTime:2022-04-12 19:20:44.431950413 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1023-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662799872 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0412 19:20:44.497996   42312 docker.go:254] overlay module found
	I0412 19:20:44.499994   42312 start.go:284] selected driver: docker
	I0412 19:20:44.500009   42312 start.go:801] validating driver "docker" against &{Name:download-only-20220412192021-42006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:download-only-20220412192021-42006 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0412 19:20:44.500329   42312 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0412 19:20:44.585502   42312 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:75 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:33 SystemTime:2022-04-12 19:20:44.526319851 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1023-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662799872 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0412 19:20:44.586005   42312 cni.go:93] Creating CNI manager for ""
	I0412 19:20:44.586019   42312 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0412 19:20:44.586031   42312 start_flags.go:306] config:
	{Name:download-only-20220412192021-42006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.6-rc.0 ClusterName:download-only-20220412192021-42006 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.5 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0412 19:20:44.588125   42312 cache.go:120] Beginning downloading kic base image for docker with containerd
	I0412 19:20:44.589423   42312 preload.go:132] Checking if preload exists for k8s version v1.23.6-rc.0 and runtime containerd
	I0412 19:20:44.589558   42312 image.go:75] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon
	I0412 19:20:44.629067   42312 image.go:79] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon, skipping pull
	I0412 19:20:44.629102   42312 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 to local cache
	I0412 19:20:44.629337   42312 image.go:59] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local cache directory
	I0412 19:20:44.629363   42312 image.go:62] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local cache directory, skipping pull
	I0412 19:20:44.629374   42312 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 exists in cache, skipping pull
	I0412 19:20:44.629394   42312 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 as a tarball
	I0412 19:20:44.703412   42312 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.23.6-rc.0/preloaded-images-k8s-v18-v1.23.6-rc.0-containerd-overlay2-amd64.tar.lz4
	I0412 19:20:44.703457   42312 cache.go:57] Caching tarball of preloaded images
	I0412 19:20:44.703705   42312 preload.go:132] Checking if preload exists for k8s version v1.23.6-rc.0 and runtime containerd
	I0412 19:20:44.706030   42312 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.23.6-rc.0-containerd-overlay2-amd64.tar.lz4 ...
	I0412 19:20:44.828120   42312 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.23.6-rc.0/preloaded-images-k8s-v18-v1.23.6-rc.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:ef3f489ac1b46c6f2e093fc88ca08f3d -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.23.6-rc.0-containerd-overlay2-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20220412192021-42006"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.23.6-rc.0/LogsDuration (0.07s)
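
Across the three download-only runs, the preload tarball URL is derived from the preload schema version (v18), the Kubernetes version, the container runtime, and the architecture. A small sketch of that naming scheme; the format string below is inferred from the three URLs recorded in this report, not copied from minikube's preload.go.

package main

import "fmt"

// preloadURL reconstructs the tarball URL pattern seen in the log for a
// given preload schema version, Kubernetes version, runtime, and arch.
func preloadURL(preloadVersion, k8sVersion, runtime, arch string) string {
	name := fmt.Sprintf("preloaded-images-k8s-%s-%s-%s-overlay2-%s.tar.lz4",
		preloadVersion, k8sVersion, runtime, arch)
	return "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/" +
		preloadVersion + "/" + k8sVersion + "/" + name
}

func main() {
	// Reproduces the v1.23.6-rc.0 URL from the download.go:101 line above.
	fmt.Println(preloadURL("v18", "v1.23.6-rc.0", "containerd", "amd64"))
}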

                                                
                                    
TestDownloadOnly/DeleteAll (0.34s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:191: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.34s)

                                                
                                    
TestDownloadOnly/DeleteAlwaysSucceeds (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:203: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-20220412192021-42006
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.21s)

                                                
                                    
TestDownloadOnlyKic (2.91s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:228: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-20220412192052-42006 --force --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:228: (dbg) Done: out/minikube-linux-amd64 start --download-only -p download-docker-20220412192052-42006 --force --alsologtostderr --driver=docker  --container-runtime=containerd: (1.789275655s)
helpers_test.go:175: Cleaning up "download-docker-20220412192052-42006" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-20220412192052-42006
--- PASS: TestDownloadOnlyKic (2.91s)

                                                
                                    
TestBinaryMirror (0.86s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:310: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-20220412192055-42006 --alsologtostderr --binary-mirror http://127.0.0.1:34247 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-20220412192055-42006" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-20220412192055-42006
--- PASS: TestBinaryMirror (0.86s)

                                                
                                    
TestOffline (118.38s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-20220412195003-42006 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=containerd

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-20220412195003-42006 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=containerd: (1m55.517638204s)
helpers_test.go:175: Cleaning up "offline-containerd-20220412195003-42006" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-20220412195003-42006

                                                
                                                
=== CONT  TestOffline
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-20220412195003-42006: (2.865092246s)
--- PASS: TestOffline (118.38s)

                                                
                                    
TestAddons/Setup (126.33s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:75: (dbg) Run:  out/minikube-linux-amd64 start -p addons-20220412192056-42006 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:75: (dbg) Done: out/minikube-linux-amd64 start -p addons-20220412192056-42006 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m6.325681263s)
--- PASS: TestAddons/Setup (126.33s)

                                                
                                    
TestAddons/parallel/Registry (27.65s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:280: registry stabilized in 9.279083ms

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:282: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...

                                                
                                                
=== CONT  TestAddons/parallel/Registry
helpers_test.go:342: "registry-v4rr2" [9adb6ea9-13b4-435e-8313-14fce67759b7] Running

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:282: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.008237628s

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:285: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:342: "registry-proxy-87flk" [18dfb15d-3a7b-4f82-ba7a-4013621a6023] Running

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:285: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.007723727s
addons_test.go:290: (dbg) Run:  kubectl --context addons-20220412192056-42006 delete po -l run=registry-test --now
addons_test.go:295: (dbg) Run:  kubectl --context addons-20220412192056-42006 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:295: (dbg) Done: kubectl --context addons-20220412192056-42006 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (16.748546683s)
addons_test.go:309: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220412192056-42006 ip
2022/04/12 19:23:29 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:338: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220412192056-42006 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (27.65s)
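
Before probing the registry, the test waits for pods matching a label selector (actual-registry=true, then registry-proxy=true) to reach Running. Below is a hypothetical standalone version of that wait, shelling out to kubectl rather than using the suite's client-go helper, and simplified to accept any matching pod that reports Running.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForRunning polls pod phases for a label selector until at least one
// pod reports Running or the timeout elapses.
func waitForRunning(ctx, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", ctx, "-n", ns,
			"get", "pods", "-l", selector,
			"-o", "jsonpath={.items[*].status.phase}").Output()
		if err == nil && strings.Contains(string(out), "Running") {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pods %q in %q not Running within %v", selector, ns, timeout)
}

func main() {
	// Context, namespace, selector, and timeout mirror the test run above.
	if err := waitForRunning("addons-20220412192056-42006", "kube-system",
		"actual-registry=true", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("registry pods are Running")
}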

                                                
                                    
TestAddons/parallel/Ingress (24.19s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:162: (dbg) Run:  kubectl --context addons-20220412192056-42006 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:162: (dbg) Done: kubectl --context addons-20220412192056-42006 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (1.398061667s)
addons_test.go:182: (dbg) Run:  kubectl --context addons-20220412192056-42006 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:195: (dbg) Run:  kubectl --context addons-20220412192056-42006 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:200: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:342: "nginx" [f5d79a9b-cca2-4e0b-b649-ced5ad6fba68] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
helpers_test.go:342: "nginx" [f5d79a9b-cca2-4e0b-b649-ced5ad6fba68] Running
addons_test.go:200: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.010348997s
addons_test.go:212: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220412192056-42006 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:236: (dbg) Run:  kubectl --context addons-20220412192056-42006 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:241: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220412192056-42006 ip
addons_test.go:247: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220412192056-42006 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p addons-20220412192056-42006 addons disable ingress-dns --alsologtostderr -v=1: (1.573149669s)
addons_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220412192056-42006 addons disable ingress --alsologtostderr -v=1

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:261: (dbg) Done: out/minikube-linux-amd64 -p addons-20220412192056-42006 addons disable ingress --alsologtostderr -v=1: (7.904572553s)
--- PASS: TestAddons/parallel/Ingress (24.19s)
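
The ingress check at addons_test.go:212 curls the cluster with a Host header so the nginx Ingress rule for nginx.example.com routes the request to the test pod. A plain-Go equivalent of that request, assuming the ingress is reachable on the node IP 192.168.49.2 reported by "minikube ip" above (the real test runs curl against 127.0.0.1 over "minikube ssh"):

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	req, err := http.NewRequest(http.MethodGet, "http://192.168.49.2/", nil)
	if err != nil {
		panic(err)
	}
	// Setting req.Host overrides the Host header, selecting the Ingress
	// rule for nginx.example.com regardless of the URL's address.
	req.Host = "nginx.example.com"

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, len(body), "bytes")
}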

                                                
                                    
TestAddons/parallel/MetricsServer (5.55s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:357: metrics-server stabilized in 9.049137ms

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:359: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
helpers_test.go:342: "metrics-server-bd6f4dd56-gqlts" [b90acc87-e3a0-43f9-84fc-e565a216296c] Running

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:359: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.008829006s

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:365: (dbg) Run:  kubectl --context addons-20220412192056-42006 top pods -n kube-system

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220412192056-42006 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.55s)

                                                
                                    
TestAddons/parallel/HelmTiller (11.05s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:406: tiller-deploy stabilized in 9.127403ms

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:408: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
helpers_test.go:342: "tiller-deploy-6d67d5465d-6xjb6" [81568414-efd5-4ec3-b643-2756007c04e9] Running

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:408: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.008760767s

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:423: (dbg) Run:  kubectl --context addons-20220412192056-42006 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:423: (dbg) Done: kubectl --context addons-20220412192056-42006 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.695605201s)
addons_test.go:428: kubectl --context addons-20220412192056-42006 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: unexpected stderr: Unable to use a TTY - input is not a terminal or the right kind of file
If you don't see a command prompt, try pressing enter.
Error attaching, falling back to logs: 
addons_test.go:440: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220412192056-42006 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (11.05s)

                                                
                                    
TestAddons/parallel/CSI (55.46s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:511: csi-hostpath-driver pods stabilized in 10.881884ms
addons_test.go:514: (dbg) Run:  kubectl --context addons-20220412192056-42006 create -f testdata/csi-hostpath-driver/pvc.yaml

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:519: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context addons-20220412192056-42006 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:524: (dbg) Run:  kubectl --context addons-20220412192056-42006 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:529: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:342: "task-pv-pod" [dd26da02-a9e7-42d5-ae2d-6afe0f3189f9] Pending
helpers_test.go:342: "task-pv-pod" [dd26da02-a9e7-42d5-ae2d-6afe0f3189f9] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])

                                                
                                                
=== CONT  TestAddons/parallel/CSI
helpers_test.go:342: "task-pv-pod" [dd26da02-a9e7-42d5-ae2d-6afe0f3189f9] Running

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:529: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 30.006874676s
addons_test.go:534: (dbg) Run:  kubectl --context addons-20220412192056-42006 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:539: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:417: (dbg) Run:  kubectl --context addons-20220412192056-42006 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:417: (dbg) Run:  kubectl --context addons-20220412192056-42006 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:544: (dbg) Run:  kubectl --context addons-20220412192056-42006 delete pod task-pv-pod
addons_test.go:550: (dbg) Run:  kubectl --context addons-20220412192056-42006 delete pvc hpvc
addons_test.go:556: (dbg) Run:  kubectl --context addons-20220412192056-42006 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:561: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context addons-20220412192056-42006 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:566: (dbg) Run:  kubectl --context addons-20220412192056-42006 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:571: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:342: "task-pv-pod-restore" [b2815365-c973-4231-8eae-14ec87378200] Pending
helpers_test.go:342: "task-pv-pod-restore" [b2815365-c973-4231-8eae-14ec87378200] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:342: "task-pv-pod-restore" [b2815365-c973-4231-8eae-14ec87378200] Running
addons_test.go:571: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 14.007030424s
addons_test.go:576: (dbg) Run:  kubectl --context addons-20220412192056-42006 delete pod task-pv-pod-restore
addons_test.go:580: (dbg) Run:  kubectl --context addons-20220412192056-42006 delete pvc hpvc-restore
addons_test.go:584: (dbg) Run:  kubectl --context addons-20220412192056-42006 delete volumesnapshot new-snapshot-demo
addons_test.go:588: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220412192056-42006 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:588: (dbg) Done: out/minikube-linux-amd64 -p addons-20220412192056-42006 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.897929412s)
addons_test.go:592: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220412192056-42006 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (55.46s)
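The run above exercises the full snapshot path: bind a PVC, run a pod on it, snapshot the volume, delete the originals, then restore a new PVC from the snapshot and mount it again. The testdata manifests themselves are not reproduced in this log; a minimal sketch of the snapshot and restore objects, assuming the addon's default csi-hostpath-snapclass / csi-hostpath-sc class names:

# hedged sketch, not the literal testdata/csi-hostpath-driver files
kubectl --context addons-20220412192056-42006 apply -f - <<'EOF'
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: new-snapshot-demo
spec:
  volumeSnapshotClassName: csi-hostpath-snapclass   # assumed addon default
  source:
    persistentVolumeClaimName: hpvc
EOF
kubectl --context addons-20220412192056-42006 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hpvc-restore
spec:
  storageClassName: csi-hostpath-sc                 # assumed addon default
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 1Gi
  dataSource:                                       # restore from the snapshot above
    name: new-snapshot-demo
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
EOF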

TestAddons/serial/GCPAuth (38.91s)

=== RUN   TestAddons/serial/GCPAuth
addons_test.go:603: (dbg) Run:  kubectl --context addons-20220412192056-42006 create -f testdata/busybox.yaml
addons_test.go:609: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [58f43593-3d62-48f5-b815-bd3ab018e3ed] Pending
helpers_test.go:342: "busybox" [58f43593-3d62-48f5-b815-bd3ab018e3ed] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [58f43593-3d62-48f5-b815-bd3ab018e3ed] Running
addons_test.go:609: (dbg) TestAddons/serial/GCPAuth: integration-test=busybox healthy within 9.01257141s
addons_test.go:615: (dbg) Run:  kubectl --context addons-20220412192056-42006 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:652: (dbg) Run:  kubectl --context addons-20220412192056-42006 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
addons_test.go:665: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220412192056-42006 addons disable gcp-auth --alsologtostderr -v=1
addons_test.go:665: (dbg) Done: out/minikube-linux-amd64 -p addons-20220412192056-42006 addons disable gcp-auth --alsologtostderr -v=1: (5.775243552s)
addons_test.go:681: (dbg) Run:  out/minikube-linux-amd64 -p addons-20220412192056-42006 addons enable gcp-auth
addons_test.go:687: (dbg) Run:  kubectl --context addons-20220412192056-42006 apply -f testdata/private-image.yaml
addons_test.go:694: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=private-image" in namespace "default" ...
helpers_test.go:342: "private-image-7f8587d5b7-knrgb" [e4120cac-24a7-4d9f-8a33-1416c8e42144] Pending / Ready:ContainersNotReady (containers with unready status: [private-image]) / ContainersReady:ContainersNotReady (containers with unready status: [private-image])
helpers_test.go:342: "private-image-7f8587d5b7-knrgb" [e4120cac-24a7-4d9f-8a33-1416c8e42144] Running
addons_test.go:694: (dbg) TestAddons/serial/GCPAuth: integration-test=private-image healthy within 14.005973785s
addons_test.go:700: (dbg) Run:  kubectl --context addons-20220412192056-42006 apply -f testdata/private-image-eu.yaml
addons_test.go:705: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=private-image-eu" in namespace "default" ...
helpers_test.go:342: "private-image-eu-869dcfd8c7-2b64f" [867497a2-af97-4f05-844c-f6790f8591aa] Pending
helpers_test.go:342: "private-image-eu-869dcfd8c7-2b64f" [867497a2-af97-4f05-844c-f6790f8591aa] Pending / Ready:ContainersNotReady (containers with unready status: [private-image-eu]) / ContainersReady:ContainersNotReady (containers with unready status: [private-image-eu])
helpers_test.go:342: "private-image-eu-869dcfd8c7-2b64f" [867497a2-af97-4f05-844c-f6790f8591aa] Running
addons_test.go:705: (dbg) TestAddons/serial/GCPAuth: integration-test=private-image-eu healthy within 8.006803402s
--- PASS: TestAddons/serial/GCPAuth (38.91s)
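The gcp-auth addon injects credentials into pods as they are created: GOOGLE_APPLICATION_CREDENTIALS and GOOGLE_CLOUD_PROJECT are what the printenv probes above assert. A sketch of the manual check (the pod must be created after the addon is enabled; commands as in the log, the combined printenv is an addition):

out/minikube-linux-amd64 -p addons-20220412192056-42006 addons enable gcp-auth
kubectl --context addons-20220412192056-42006 create -f testdata/busybox.yaml
kubectl --context addons-20220412192056-42006 exec busybox -- \
  /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS GOOGLE_CLOUD_PROJECT"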

TestAddons/StoppedEnableDisable (20.43s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:132: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-20220412192056-42006
addons_test.go:132: (dbg) Done: out/minikube-linux-amd64 stop -p addons-20220412192056-42006: (20.228629791s)
addons_test.go:136: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-20220412192056-42006
addons_test.go:140: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-20220412192056-42006
--- PASS: TestAddons/StoppedEnableDisable (20.43s)
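The point of this test is that addon state can be flipped while the cluster is down: enabling or disabling on a stopped profile records the change in the profile's config to take effect on the next start (a reading of the sequence above, not something the log states directly):

out/minikube-linux-amd64 stop -p addons-20220412192056-42006
out/minikube-linux-amd64 addons enable dashboard -p addons-20220412192056-42006   # recorded while stopped
out/minikube-linux-amd64 addons disable dashboard -p addons-20220412192056-42006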

TestCertOptions (57.75s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-20220412195344-42006 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-20220412195344-42006 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (49.585992879s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-20220412195344-42006 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-20220412195344-42006 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-20220412195344-42006 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-20220412195344-42006" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-20220412195344-42006
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-20220412195344-42006: (7.177367887s)
--- PASS: TestCertOptions (57.75s)
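To see that the extra --apiserver-ips/--apiserver-names SANs and the 8555 port actually landed in the serving certificate, the same openssl probe the test runs can be filtered down to the SAN block (the grep is an addition, not part of the test):

out/minikube-linux-amd64 -p cert-options-20220412195344-42006 ssh \
  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
  | grep -A1 "Subject Alternative Name"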

TestCertExpiration (440.99s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-20220412195203-42006 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-20220412195203-42006 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (4m3.27273615s)

=== CONT  TestCertExpiration
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-20220412195203-42006 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd

=== CONT  TestCertExpiration
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-20220412195203-42006 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (14.880287875s)
helpers_test.go:175: Cleaning up "cert-expiration-20220412195203-42006" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-20220412195203-42006
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-20220412195203-42006: (2.831640105s)
--- PASS: TestCertExpiration (440.99s)
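The two starts above first issue certificates with a deliberately short 3m lifetime, then restart the same profile with --cert-expiration=8760h (one year) to confirm that lapsed certs are regenerated rather than fatal. Remaining validity can be checked in-node with openssl (a sketch, not part of the test):

out/minikube-linux-amd64 -p cert-expiration-20220412195203-42006 ssh \
  "openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver.crt"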

TestForceSystemdFlag (51.18s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-20220412195205-42006 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E0412 19:52:17.856237   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/ingress-addon-legacy-20220412192911-42006/client.crt: no such file or directory

=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-20220412195205-42006 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (48.202031358s)
docker_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-20220412195205-42006 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-20220412195205-42006" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-20220412195205-42006
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-20220412195205-42006: (2.629863284s)
--- PASS: TestForceSystemdFlag (51.18s)
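--force-systemd switches the node's containerd to the systemd cgroup driver, and the config.toml probe above is how the test confirms it. A sketch of the check by hand (the SystemdCgroup key sits under the runc runtime options in containerd's config; the expected value is an assumption):

out/minikube-linux-amd64 -p force-systemd-flag-20220412195205-42006 ssh \
  "cat /etc/containerd/config.toml" | grep SystemdCgroup
# expected: SystemdCgroup = true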

TestForceSystemdEnv (68.22s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:150: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-20220412195003-42006 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd

=== CONT  TestForceSystemdEnv
docker_test.go:150: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-20220412195003-42006 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (1m5.206221569s)
docker_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-20220412195003-42006 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-20220412195003-42006" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-20220412195003-42006
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-20220412195003-42006: (2.663761704s)
--- PASS: TestForceSystemdEnv (68.22s)
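Same assertion as TestForceSystemdFlag, but driven through the environment rather than a flag; the log does not show the variable, so treat MINIKUBE_FORCE_SYSTEMD=true as an assumption about what the test exports:

MINIKUBE_FORCE_SYSTEMD=true out/minikube-linux-amd64 start \
  -p force-systemd-env-20220412195003-42006 --memory=2048 \
  --driver=docker --container-runtime=containerd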

TestKVMDriverInstallOrUpdate (4.04s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (4.04s)

TestErrorSpam/setup (39.68s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:78: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20220412192503-42006 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-20220412192503-42006 --driver=docker  --container-runtime=containerd
error_spam_test.go:78: (dbg) Done: out/minikube-linux-amd64 start -p nospam-20220412192503-42006 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-20220412192503-42006 --driver=docker  --container-runtime=containerd: (39.681708649s)
error_spam_test.go:88: acceptable stderr: "! Your cgroup does not allow setting memory."
--- PASS: TestErrorSpam/setup (39.68s)

TestErrorSpam/start (0.97s)

=== RUN   TestErrorSpam/start
error_spam_test.go:213: Cleaning up 1 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220412192503-42006 --log_dir /tmp/nospam-20220412192503-42006 start --dry-run
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220412192503-42006 --log_dir /tmp/nospam-20220412192503-42006 start --dry-run
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220412192503-42006 --log_dir /tmp/nospam-20220412192503-42006 start --dry-run
--- PASS: TestErrorSpam/start (0.97s)

TestErrorSpam/status (1.18s)

=== RUN   TestErrorSpam/status
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220412192503-42006 --log_dir /tmp/nospam-20220412192503-42006 status
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220412192503-42006 --log_dir /tmp/nospam-20220412192503-42006 status
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220412192503-42006 --log_dir /tmp/nospam-20220412192503-42006 status
--- PASS: TestErrorSpam/status (1.18s)

TestErrorSpam/pause (2.08s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220412192503-42006 --log_dir /tmp/nospam-20220412192503-42006 pause
error_spam_test.go:156: (dbg) Done: out/minikube-linux-amd64 -p nospam-20220412192503-42006 --log_dir /tmp/nospam-20220412192503-42006 pause: (1.030610846s)
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220412192503-42006 --log_dir /tmp/nospam-20220412192503-42006 pause
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220412192503-42006 --log_dir /tmp/nospam-20220412192503-42006 pause
--- PASS: TestErrorSpam/pause (2.08s)

TestErrorSpam/unpause (1.62s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220412192503-42006 --log_dir /tmp/nospam-20220412192503-42006 unpause
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220412192503-42006 --log_dir /tmp/nospam-20220412192503-42006 unpause
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220412192503-42006 --log_dir /tmp/nospam-20220412192503-42006 unpause
--- PASS: TestErrorSpam/unpause (1.62s)

TestErrorSpam/stop (14.95s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220412192503-42006 --log_dir /tmp/nospam-20220412192503-42006 stop
error_spam_test.go:156: (dbg) Done: out/minikube-linux-amd64 -p nospam-20220412192503-42006 --log_dir /tmp/nospam-20220412192503-42006 stop: (14.683050452s)
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220412192503-42006 --log_dir /tmp/nospam-20220412192503-42006 stop
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20220412192503-42006 --log_dir /tmp/nospam-20220412192503-42006 stop
--- PASS: TestErrorSpam/stop (14.95s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1784: local sync path: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/files/etc/test/nested/copy/42006/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (58.05s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2163: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220412192609-42006 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2163: (dbg) Done: out/minikube-linux-amd64 start -p functional-20220412192609-42006 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (58.050886566s)
--- PASS: TestFunctional/serial/StartWithProxy (58.05s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (15.75s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:654: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220412192609-42006 --alsologtostderr -v=8
functional_test.go:654: (dbg) Done: out/minikube-linux-amd64 start -p functional-20220412192609-42006 --alsologtostderr -v=8: (15.750800944s)
functional_test.go:658: soft start took 15.75156006s for "functional-20220412192609-42006" cluster.
--- PASS: TestFunctional/serial/SoftStart (15.75s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:676: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.18s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:691: (dbg) Run:  kubectl --context functional-20220412192609-42006 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.18s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.25s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1044: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412192609-42006 cache add k8s.gcr.io/pause:3.1
functional_test.go:1044: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412192609-42006 cache add k8s.gcr.io/pause:3.3
functional_test.go:1044: (dbg) Done: out/minikube-linux-amd64 -p functional-20220412192609-42006 cache add k8s.gcr.io/pause:3.3: (1.496446718s)
functional_test.go:1044: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412192609-42006 cache add k8s.gcr.io/pause:latest
functional_test.go:1044: (dbg) Done: out/minikube-linux-amd64 -p functional-20220412192609-42006 cache add k8s.gcr.io/pause:latest: (1.054626981s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.25s)

TestFunctional/serial/CacheCmd/cache/add_local (2.32s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1072: (dbg) Run:  docker build -t minikube-local-cache-test:functional-20220412192609-42006 /tmp/TestFunctionalserialCacheCmdcacheadd_local797480645/001
functional_test.go:1084: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412192609-42006 cache add minikube-local-cache-test:functional-20220412192609-42006
functional_test.go:1084: (dbg) Done: out/minikube-linux-amd64 -p functional-20220412192609-42006 cache add minikube-local-cache-test:functional-20220412192609-42006: (2.013106457s)
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412192609-42006 cache delete minikube-local-cache-test:functional-20220412192609-42006
functional_test.go:1078: (dbg) Run:  docker rmi minikube-local-cache-test:functional-20220412192609-42006
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.32s)
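Caching a locally built image follows the round trip the test drives: build with the host docker, push into minikube's cache, and clean up both sides. Condensed from the commands above (the build context path is the test's temp dir; any directory with a Dockerfile works):

docker build -t minikube-local-cache-test:functional-20220412192609-42006 .
out/minikube-linux-amd64 -p functional-20220412192609-42006 cache add minikube-local-cache-test:functional-20220412192609-42006
out/minikube-linux-amd64 -p functional-20220412192609-42006 cache delete minikube-local-cache-test:functional-20220412192609-42006
docker rmi minikube-local-cache-test:functional-20220412192609-42006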

TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1097: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1105: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.37s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1119: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412192609-42006 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.37s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1142: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412192609-42006 ssh sudo crictl rmi k8s.gcr.io/pause:latest
functional_test.go:1148: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412192609-42006 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1148: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220412192609-42006 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (364.424066ms)

-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412192609-42006 cache reload
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412192609-42006 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.00s)
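The reload check above: remove the image inside the node with crictl, watch inspecti fail with "no such image" (the expected exit 1), then let cache reload push every cached image back into the runtime. Condensed:

out/minikube-linux-amd64 -p functional-20220412192609-42006 ssh sudo crictl rmi k8s.gcr.io/pause:latest
out/minikube-linux-amd64 -p functional-20220412192609-42006 ssh sudo crictl inspecti k8s.gcr.io/pause:latest   # exit 1: image gone
out/minikube-linux-amd64 -p functional-20220412192609-42006 cache reload
out/minikube-linux-amd64 -p functional-20220412192609-42006 ssh sudo crictl inspecti k8s.gcr.io/pause:latest   # succeeds again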

TestFunctional/serial/CacheCmd/cache/delete (0.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1167: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:1167: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

TestFunctional/serial/MinikubeKubectlCmd (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:711: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412192609-42006 kubectl -- --context functional-20220412192609-42006 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:736: (dbg) Run:  out/kubectl --context functional-20220412192609-42006 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (39.5s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:752: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220412192609-42006 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0412 19:28:02.670367   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/addons-20220412192056-42006/client.crt: no such file or directory
E0412 19:28:02.676317   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/addons-20220412192056-42006/client.crt: no such file or directory
E0412 19:28:02.686580   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/addons-20220412192056-42006/client.crt: no such file or directory
E0412 19:28:02.706929   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/addons-20220412192056-42006/client.crt: no such file or directory
E0412 19:28:02.747247   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/addons-20220412192056-42006/client.crt: no such file or directory
E0412 19:28:02.827628   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/addons-20220412192056-42006/client.crt: no such file or directory
E0412 19:28:02.988063   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/addons-20220412192056-42006/client.crt: no such file or directory
E0412 19:28:03.308666   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/addons-20220412192056-42006/client.crt: no such file or directory
E0412 19:28:03.949645   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/addons-20220412192056-42006/client.crt: no such file or directory
E0412 19:28:05.230146   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/addons-20220412192056-42006/client.crt: no such file or directory
E0412 19:28:07.791953   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/addons-20220412192056-42006/client.crt: no such file or directory
functional_test.go:752: (dbg) Done: out/minikube-linux-amd64 start -p functional-20220412192609-42006 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (39.499194697s)
functional_test.go:756: restart took 39.499316265s for "functional-20220412192609-42006" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (39.50s)
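--extra-config forwards component flags as component.key=value; here it restarts the existing cluster with an extra apiserver admission plugin. The interleaved E0412 cert_rotation lines point at a client cert from the earlier addons-20220412192056-42006 profile that no longer exists on disk; they read as background noise from the test binary's kube client rather than a failure of this test. The invocation, standalone:

out/minikube-linux-amd64 start -p functional-20220412192609-42006 \
  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all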

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:805: (dbg) Run:  kubectl --context functional-20220412192609-42006 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:820: etcd phase: Running
functional_test.go:830: etcd status: Ready
functional_test.go:820: kube-apiserver phase: Running
functional_test.go:830: kube-apiserver status: Ready
functional_test.go:820: kube-controller-manager phase: Running
functional_test.go:830: kube-controller-manager status: Ready
functional_test.go:820: kube-scheduler phase: Running
functional_test.go:830: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)
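The health probe lists control-plane pods by label and checks phase plus readiness, as reflected in the phase/status pairs above. A sketch of the same query by hand (the jsonpath formatting is an addition):

kubectl --context functional-20220412192609-42006 get po -l tier=control-plane -n kube-system \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'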

TestFunctional/serial/LogsCmd (1.17s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1231: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412192609-42006 logs
functional_test.go:1231: (dbg) Done: out/minikube-linux-amd64 -p functional-20220412192609-42006 logs: (1.166057106s)
--- PASS: TestFunctional/serial/LogsCmd (1.17s)

TestFunctional/serial/LogsFileCmd (1.18s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1245: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412192609-42006 logs --file /tmp/TestFunctionalserialLogsFileCmd649344604/001/logs.txt
E0412 19:28:12.912666   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/addons-20220412192056-42006/client.crt: no such file or directory
functional_test.go:1245: (dbg) Done: out/minikube-linux-amd64 -p functional-20220412192609-42006 logs --file /tmp/TestFunctionalserialLogsFileCmd649344604/001/logs.txt: (1.175770987s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.18s)

TestFunctional/parallel/ConfigCmd (0.45s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1194: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412192609-42006 config unset cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1194: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412192609-42006 config get cpus
functional_test.go:1194: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220412192609-42006 config get cpus: exit status 14 (84.778857ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1194: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412192609-42006 config set cpus 2
functional_test.go:1194: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412192609-42006 config get cpus
functional_test.go:1194: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412192609-42006 config unset cpus
functional_test.go:1194: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412192609-42006 config get cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1194: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220412192609-42006 config get cpus: exit status 14 (67.717651ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.45s)
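config get on an unset key exits 14 with "specified key could not be found in config", which is what both Non-zero exits above assert; a set/get/unset round trip sits in between. Condensed:

out/minikube-linux-amd64 -p functional-20220412192609-42006 config get cpus     # exit 14 while unset
out/minikube-linux-amd64 -p functional-20220412192609-42006 config set cpus 2
out/minikube-linux-amd64 -p functional-20220412192609-42006 config get cpus     # prints 2
out/minikube-linux-amd64 -p functional-20220412192609-42006 config unset cpus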

TestFunctional/parallel/DashboardCmd (29.26s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:900: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-20220412192609-42006 --alsologtostderr -v=1]

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-20220412192609-42006 --alsologtostderr -v=1] ...
helpers_test.go:506: unable to kill pid 76296: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (29.26s)

TestFunctional/parallel/DryRun (0.6s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:969: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220412192609-42006 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:969: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-20220412192609-42006 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (250.197504ms)

-- stdout --
	* [functional-20220412192609-42006] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=13812
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on existing profile
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	
	

-- /stdout --
** stderr ** 
	I0412 19:28:41.811587   76771 out.go:297] Setting OutFile to fd 1 ...
	I0412 19:28:41.811728   76771 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0412 19:28:41.811748   76771 out.go:310] Setting ErrFile to fd 2...
	I0412 19:28:41.811757   76771 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0412 19:28:41.811924   76771 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/bin
	I0412 19:28:41.812251   76771 out.go:304] Setting JSON to false
	I0412 19:28:41.813556   76771 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":7875,"bootTime":1649783847,"procs":399,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1023-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0412 19:28:41.813670   76771 start.go:125] virtualization: kvm guest
	I0412 19:28:41.817112   76771 out.go:176] * [functional-20220412192609-42006] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	I0412 19:28:41.818850   76771 out.go:176]   - MINIKUBE_LOCATION=13812
	I0412 19:28:41.820332   76771 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0412 19:28:41.821836   76771 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0412 19:28:41.823350   76771 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube
	I0412 19:28:41.824754   76771 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0412 19:28:41.825240   76771 config.go:178] Loaded profile config "functional-20220412192609-42006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.5
	I0412 19:28:41.825762   76771 driver.go:346] Setting default libvirt URI to qemu:///system
	I0412 19:28:41.869320   76771 docker.go:137] docker version: linux-20.10.14
	I0412 19:28:41.869438   76771 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0412 19:28:41.976167   76771 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:76 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:39 SystemTime:2022-04-12 19:28:41.902776828 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1023-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662799872 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0412 19:28:41.976269   76771 docker.go:254] overlay module found
	I0412 19:28:41.979108   76771 out.go:176] * Using the docker driver based on existing profile
	I0412 19:28:41.979139   76771 start.go:284] selected driver: docker
	I0412 19:28:41.979147   76771 start.go:801] validating driver "docker" against &{Name:functional-20220412192609-42006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:functional-20220412192609-42006 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.23.5 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0412 19:28:41.979291   76771 start.go:812] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	W0412 19:28:41.979336   76771 oci.go:120] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0412 19:28:41.979359   76771 out.go:241] ! Your cgroup does not allow setting memory.
	! Your cgroup does not allow setting memory.
	I0412 19:28:41.981098   76771 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0412 19:28:41.983211   76771 out.go:176] 
	W0412 19:28:41.983323   76771 out.go:241] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0412 19:28:41.984841   76771 out.go:176] 

** /stderr **
functional_test.go:986: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220412192609-42006 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.60s)
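--dry-run runs the full validation pipeline without creating anything: the 250MB request trips RSRC_INSUFFICIENT_REQ_MEMORY (usable minimum 1800MB) and exits 23, while the second invocation without --memory validates cleanly. Condensed:

out/minikube-linux-amd64 start -p functional-20220412192609-42006 --dry-run --memory 250MB \
  --alsologtostderr --driver=docker --container-runtime=containerd       # exit 23
out/minikube-linux-amd64 start -p functional-20220412192609-42006 --dry-run \
  --alsologtostderr -v=1 --driver=docker --container-runtime=containerd  # validates OK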

TestFunctional/parallel/InternationalLanguage (0.55s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1015: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20220412192609-42006 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1015: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-20220412192609-42006 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (552.274232ms)

-- stdout --
	* [functional-20220412192609-42006] minikube v1.25.2 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=13812
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Utilisation du pilote docker basé sur le profil existant
	  - Plus d'informations: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	
	

-- /stdout --
** stderr ** 
	I0412 19:28:41.254195   76688 out.go:297] Setting OutFile to fd 1 ...
	I0412 19:28:41.254299   76688 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0412 19:28:41.254307   76688 out.go:310] Setting ErrFile to fd 2...
	I0412 19:28:41.254312   76688 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0412 19:28:41.254466   76688 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/bin
	I0412 19:28:41.254701   76688 out.go:304] Setting JSON to false
	I0412 19:28:41.255813   76688 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":7874,"bootTime":1649783847,"procs":399,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1023-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0412 19:28:41.255890   76688 start.go:125] virtualization: kvm guest
	I0412 19:28:41.370415   76688 out.go:176] * [functional-20220412192609-42006] minikube v1.25.2 sur Ubuntu 20.04 (kvm/amd64)
	I0412 19:28:41.568405   76688 out.go:176]   - MINIKUBE_LOCATION=13812
	I0412 19:28:41.576130   76688 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0412 19:28:41.578245   76688 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0412 19:28:41.580474   76688 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube
	I0412 19:28:41.582440   76688 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0412 19:28:41.583018   76688 config.go:178] Loaded profile config "functional-20220412192609-42006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.5
	I0412 19:28:41.583646   76688 driver.go:346] Setting default libvirt URI to qemu:///system
	I0412 19:28:41.629977   76688 docker.go:137] docker version: linux-20.10.14
	I0412 19:28:41.630101   76688 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0412 19:28:41.725225   76688 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:76 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:39 SystemTime:2022-04-12 19:28:41.660209902 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1023-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662799872 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0412 19:28:41.725395   76688 docker.go:254] overlay module found
	I0412 19:28:41.727420   76688 out.go:176] * Utilisation du pilote docker basé sur le profil existant (en: "Using the docker driver based on the existing profile")
	I0412 19:28:41.727449   76688 start.go:284] selected driver: docker
	I0412 19:28:41.727457   76688 start.go:801] validating driver "docker" against &{Name:functional-20220412192609-42006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.5 ClusterName:functional-20220412192609-42006 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.23.5 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
	I0412 19:28:41.727588   76688 start.go:812] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	W0412 19:28:41.727625   76688 oci.go:120] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0412 19:28:41.727649   76688 out.go:241] ! Votre groupe de contrôle ne permet pas de définir la mémoire. (en: "Your cgroup does not allow setting memory.")
	! Votre groupe de contrôle ne permet pas de définir la mémoire.
	I0412 19:28:41.729381   76688 out.go:176]   - Plus d'informations (en: "More information"): https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0412 19:28:41.731216   76688 out.go:176] 
	W0412 19:28:41.731363   76688 out.go:241] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo (en: "Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: the requested memory allocation of 250MiB is below the usable minimum of 1800MB")
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0412 19:28:41.732856   76688 out.go:176] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.55s)
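
This block is the InternationalLanguage check: minikube runs under a French locale and the test asserts that the user-facing error above is localized. A minimal sketch of reproducing it by hand, assuming the translation is selected via LC_ALL (the profile name is illustrative; the 250MB request is deliberately below minikube's 1800MB minimum so the run fails fast with the localized message):

	LC_ALL=fr out/minikube-linux-amd64 start -p i18n-demo --memory=250 --driver=docker --container-runtime=containerd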

TestFunctional/parallel/StatusCmd (1.77s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:849: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412192609-42006 status
functional_test.go:855: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412192609-42006 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:867: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412192609-42006 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.77s)
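
The -f flag takes a Go template over minikube's status struct: {{.Host}}, {{.Kubelet}}, {{.APIServer}} and {{.Kubeconfig}} are the fields being read, while the labels to the left of each colon are free-form text (the "kublet" spelling in the logged command is just such an arbitrary key, not a field name). A minimal sketch of a custom format, with labels chosen here for illustration:

	out/minikube-linux-amd64 -p functional-20220412192609-42006 status -f 'host:{{.Host}},kubelet:{{.Kubelet}}'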

TestFunctional/parallel/ServiceCmd (13.46s)

=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1435: (dbg) Run:  kubectl --context functional-20220412192609-42006 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-20220412192609-42006 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:342: "hello-node-54fbb85-9mqcb" [98cfea0d-017f-4fa1-93e9-3634f438886d] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])

=== CONT  TestFunctional/parallel/ServiceCmd
helpers_test.go:342: "hello-node-54fbb85-9mqcb" [98cfea0d-017f-4fa1-93e9-3634f438886d] Running

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 10.008865323s
functional_test.go:1451: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412192609-42006 service list

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1451: (dbg) Done: out/minikube-linux-amd64 -p functional-20220412192609-42006 service list: (1.004582904s)
functional_test.go:1465: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412192609-42006 service --namespace=default --https --url hello-node

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1478: found endpoint: https://192.168.49.2:31616
functional_test.go:1493: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412192609-42006 service hello-node --url --format={{.IP}}

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1507: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412192609-42006 service hello-node --url
functional_test.go:1513: found endpoint for hello-node: http://192.168.49.2:31616
--- PASS: TestFunctional/parallel/ServiceCmd (13.46s)
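
Condensed, the flow this test exercises is: create a deployment, expose it as a NodePort service, then ask minikube for a reachable URL. The same steps by hand, using the names from the log (the NodePort, 31616 here, is assigned per cluster):

	kubectl --context functional-20220412192609-42006 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
	kubectl --context functional-20220412192609-42006 expose deployment hello-node --type=NodePort --port=8080
	out/minikube-linux-amd64 -p functional-20220412192609-42006 service hello-node --url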

TestFunctional/parallel/ServiceCmdConnect (10.88s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1561: (dbg) Run:  kubectl --context functional-20220412192609-42006 create deployment hello-node-connect --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1567: (dbg) Run:  kubectl --context functional-20220412192609-42006 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1572: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:342: "hello-node-connect-74cf8bc446-zld49" [6674da37-790b-47a7-a653-80a6fbe5e094] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])

=== CONT  TestFunctional/parallel/ServiceCmdConnect
helpers_test.go:342: "hello-node-connect-74cf8bc446-zld49" [6674da37-790b-47a7-a653-80a6fbe5e094] Running

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1572: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.012023182s
functional_test.go:1581: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412192609-42006 service hello-node-connect --url

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1587: found endpoint for hello-node-connect: http://192.168.49.2:30711
functional_test.go:1607: http://192.168.49.2:30711: success! body:

Hostname: hello-node-connect-74cf8bc446-zld49

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30711
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.88s)
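
The echoserver response body above can also be checked by hand against the discovered endpoint; curl is not part of the test, just a convenient probe:

	curl -s http://192.168.49.2:30711/ | grep Hostname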

TestFunctional/parallel/AddonsCmd (0.21s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1622: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412192609-42006 addons list
functional_test.go:1634: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412192609-42006 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.21s)

TestFunctional/parallel/PersistentVolumeClaim (28.05s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:342: "storage-provisioner" [5f01b911-92e3-4881-921b-4699292a3c60] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.014946718s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-20220412192609-42006 get storageclass -o=json

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-20220412192609-42006 apply -f testdata/storage-provisioner/pvc.yaml

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-20220412192609-42006 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-20220412192609-42006 apply -f testdata/storage-provisioner/pod.yaml

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:342: "sp-pod" [fb3f1346-09a2-4aa8-90d4-f7a931d44b51] Pending

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [fb3f1346-09a2-4aa8-90d4-f7a931d44b51] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [fb3f1346-09a2-4aa8-90d4-f7a931d44b51] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.007559764s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-20220412192609-42006 exec sp-pod -- touch /tmp/mount/foo

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-20220412192609-42006 delete -f testdata/storage-provisioner/pod.yaml

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-20220412192609-42006 delete -f testdata/storage-provisioner/pod.yaml: (1.208837356s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-20220412192609-42006 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:342: "sp-pod" [91652a74-a72d-47bb-b2ed-a42594848b0a] Pending

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [91652a74-a72d-47bb-b2ed-a42594848b0a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [91652a74-a72d-47bb-b2ed-a42594848b0a] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.007967278s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-20220412192609-42006 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (28.05s)
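
The manifests under testdata/storage-provisioner are not reproduced in the log. A minimal sketch of an equivalent claim and pod, matching the names, label and mount path the test touches (the real testdata files may differ in detail):

	kubectl --context functional-20220412192609-42006 apply -f - <<-'EOF'
	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: myclaim
	spec:
	  accessModes: ["ReadWriteOnce"]
	  resources:
	    requests:
	      storage: 500Mi
	---
	apiVersion: v1
	kind: Pod
	metadata:
	  name: sp-pod
	  labels:
	    test: storage-provisioner
	spec:
	  containers:
	  - name: myfrontend
	    image: nginx
	    volumeMounts:
	    - name: mypd
	      mountPath: /tmp/mount
	  volumes:
	  - name: mypd
	    persistentVolumeClaim:
	      claimName: myclaim
	EOF

The delete/re-apply cycle in the log is the actual assertion: /tmp/mount/foo written by the first sp-pod is still visible from the second one, i.e. the claim outlives the pod.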

TestFunctional/parallel/SSHCmd (0.83s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1657: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412192609-42006 ssh "echo hello"

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1674: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412192609-42006 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.83s)

TestFunctional/parallel/CpCmd (1.76s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412192609-42006 cp testdata/cp-test.txt /home/docker/cp-test.txt

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412192609-42006 ssh -n functional-20220412192609-42006 "sudo cat /home/docker/cp-test.txt"

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412192609-42006 cp functional-20220412192609-42006:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1284172763/001/cp-test.txt
E0412 19:28:43.634930   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/addons-20220412192056-42006/client.crt: no such file or directory
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412192609-42006 ssh -n functional-20220412192609-42006 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.76s)
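
cp copies in both directions: a plain path is local, and the <profile>:<path> form addresses a file inside the node. The two transfers above, by hand (the local destination path here is illustrative):

	out/minikube-linux-amd64 -p functional-20220412192609-42006 cp testdata/cp-test.txt /home/docker/cp-test.txt
	out/minikube-linux-amd64 -p functional-20220412192609-42006 cp functional-20220412192609-42006:/home/docker/cp-test.txt ./cp-test.txt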

TestFunctional/parallel/MySQL (22.88s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1722: (dbg) Run:  kubectl --context functional-20220412192609-42006 replace --force -f testdata/mysql.yaml
functional_test.go:1728: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:342: "mysql-b87c45988-ctlm8" [9cba2ba1-84c3-422b-ac1a-bc431b945fe3] Pending

=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:342: "mysql-b87c45988-ctlm8" [9cba2ba1-84c3-422b-ac1a-bc431b945fe3] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])

=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:342: "mysql-b87c45988-ctlm8" [9cba2ba1-84c3-422b-ac1a-bc431b945fe3] Running

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1728: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 19.043602089s
functional_test.go:1736: (dbg) Run:  kubectl --context functional-20220412192609-42006 exec mysql-b87c45988-ctlm8 -- mysql -ppassword -e "show databases;"
functional_test.go:1736: (dbg) Non-zero exit: kubectl --context functional-20220412192609-42006 exec mysql-b87c45988-ctlm8 -- mysql -ppassword -e "show databases;": exit status 1 (301.590112ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1736: (dbg) Run:  kubectl --context functional-20220412192609-42006 exec mysql-b87c45988-ctlm8 -- mysql -ppassword -e "show databases;"
functional_test.go:1736: (dbg) Non-zero exit: kubectl --context functional-20220412192609-42006 exec mysql-b87c45988-ctlm8 -- mysql -ppassword -e "show databases;": exit status 1 (207.446802ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1736: (dbg) Run:  kubectl --context functional-20220412192609-42006 exec mysql-b87c45988-ctlm8 -- mysql -ppassword -e "show databases;"
2022/04/12 19:29:07 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/MySQL (22.88s)
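
The two non-zero exits above are expected: ERROR 1045 and ERROR 2002 are what mysqld reports while it is still initializing, and the test simply retries until a query succeeds. An equivalent wait loop by hand, with the pod name taken from the log:

	until kubectl --context functional-20220412192609-42006 exec mysql-b87c45988-ctlm8 -- mysql -ppassword -e "show databases;"; do sleep 2; done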

TestFunctional/parallel/FileSync (0.36s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1858: Checking for existence of /etc/test/nested/copy/42006/hosts within VM
functional_test.go:1860: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412192609-42006 ssh "sudo cat /etc/test/nested/copy/42006/hosts"
functional_test.go:1865: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.36s)
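
File sync copies anything placed under $MINIKUBE_HOME/files/ into the node at the same relative path when the cluster starts; the file read back above was pre-seeded by the test harness. A sketch of seeding one by hand, before minikube start (paths match what the test checks):

	mkdir -p "$MINIKUBE_HOME/files/etc/test/nested/copy/42006"
	echo "Test file for checking file sync process" > "$MINIKUBE_HOME/files/etc/test/nested/copy/42006/hosts"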

TestFunctional/parallel/CertSync (2.16s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1901: Checking for existence of /etc/ssl/certs/42006.pem within VM
functional_test.go:1902: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412192609-42006 ssh "sudo cat /etc/ssl/certs/42006.pem"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1901: Checking for existence of /usr/share/ca-certificates/42006.pem within VM
functional_test.go:1902: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412192609-42006 ssh "sudo cat /usr/share/ca-certificates/42006.pem"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1901: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1902: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412192609-42006 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1928: Checking for existence of /etc/ssl/certs/420062.pem within VM
functional_test.go:1929: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412192609-42006 ssh "sudo cat /etc/ssl/certs/420062.pem"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1928: Checking for existence of /usr/share/ca-certificates/420062.pem within VM
functional_test.go:1929: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412192609-42006 ssh "sudo cat /usr/share/ca-certificates/420062.pem"
functional_test.go:1928: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1929: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412192609-42006 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.16s)
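
Cert sync works the same way for $MINIKUBE_HOME/certs/: each PEM is copied into /etc/ssl/certs and /usr/share/ca-certificates, plus a hash-named copy (51391683.0 and 3ec20f2e.0 above) so OpenSSL can locate it by subject hash. Assuming 42006.pem is the certificate the test seeded, the hash name it checks can be derived with:

	openssl x509 -in "$MINIKUBE_HOME/certs/42006.pem" -noout -hash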

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:214: (dbg) Run:  kubectl --context functional-20220412192609-42006 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)
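
The go-template above iterates the first node's label map and prints its keys; a jsonpath equivalent that dumps keys and values is:

	kubectl --context functional-20220412192609-42006 get nodes -o jsonpath='{.items[0].metadata.labels}'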

TestFunctional/parallel/NonActiveRuntimeDisabled (0.82s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1956: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412192609-42006 ssh "sudo systemctl is-active docker"

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1956: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220412192609-42006 ssh "sudo systemctl is-active docker": exit status 1 (396.248563ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:1956: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412192609-42006 ssh "sudo systemctl is-active crio"

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1956: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220412192609-42006 ssh "sudo systemctl is-active crio": exit status 1 (421.987632ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.82s)
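
The "failures" above are the pass condition: systemctl is-active prints "inactive" and exits 3 for a stopped unit, the in-node ssh session reports "Process exited with status 3", and minikube surfaces that as its own exit status 1. With containerd as the active runtime, both docker and crio are expected to be inactive:

	out/minikube-linux-amd64 -p functional-20220412192609-42006 ssh "sudo systemctl is-active docker"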

TestFunctional/parallel/Version/short (0.09s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2185: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412192609-42006 version --short
--- PASS: TestFunctional/parallel/Version/short (0.09s)

TestFunctional/parallel/Version/components (2.07s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2199: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412192609-42006 version -o=json --components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2199: (dbg) Done: out/minikube-linux-amd64 -p functional-20220412192609-42006 version -o=json --components: (2.072026551s)
--- PASS: TestFunctional/parallel/Version/components (2.07s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.23s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2048: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412192609-42006 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.23s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.23s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2048: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412192609-42006 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.23s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.25s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2048: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412192609-42006 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.25s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412192609-42006 image ls --format short
functional_test.go:261: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20220412192609-42006 image ls --format short:
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.6
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/kube-scheduler:v1.23.5
k8s.gcr.io/kube-proxy:v1.23.5
k8s.gcr.io/kube-controller-manager:v1.23.5
k8s.gcr.io/kube-apiserver:v1.23.5
k8s.gcr.io/etcd:3.5.1-0
k8s.gcr.io/echoserver:1.8
k8s.gcr.io/coredns/coredns:v1.8.6
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-20220412192609-42006
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-20220412192609-42006
docker.io/kindest/kindnetd:v20210326-1e038dc5
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.33s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412192609-42006 image ls --format table

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20220412192609-42006 image ls --format table:
|---------------------------------------------|---------------------------------|---------------|--------|
|                    Image                    |               Tag               |   Image ID    |  Size  |
|---------------------------------------------|---------------------------------|---------------|--------|
| k8s.gcr.io/etcd                             | 3.5.1-0                         | sha256:25f8c7 | 98.9MB |
| k8s.gcr.io/kube-controller-manager          | v1.23.5                         | sha256:b0c9e5 | 30.2MB |
| docker.io/kindest/kindnetd                  | v20210326-1e038dc5              | sha256:6de166 | 54MB   |
| gcr.io/k8s-minikube/storage-provisioner     | v5                              | sha256:6e38f4 | 9.06MB |
| k8s.gcr.io/kube-proxy                       | v1.23.5                         | sha256:3c53fa | 39.3MB |
| k8s.gcr.io/pause                            | 3.1                             | sha256:da86e6 | 353kB  |
| k8s.gcr.io/pause                            | 3.3                             | sha256:0184c1 | 298kB  |
| k8s.gcr.io/pause                            | latest                          | sha256:350b16 | 72.3kB |
| docker.io/library/minikube-local-cache-test | functional-20220412192609-42006 | sha256:41e073 | 1.74kB |
| docker.io/library/nginx                     | alpine                          | sha256:51696c | 10.2MB |
| k8s.gcr.io/echoserver                       | 1.8                             | sha256:82e4c8 | 46.2MB |
| k8s.gcr.io/kube-apiserver                   | v1.23.5                         | sha256:3fc1d6 | 32.6MB |
| k8s.gcr.io/kube-scheduler                   | v1.23.5                         | sha256:884d49 | 15.1MB |
| docker.io/library/nginx                     | latest                          | sha256:12766a | 56.7MB |
| gcr.io/google-containers/addon-resizer      | functional-20220412192609-42006 | sha256:ffd4cf | 10.8MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc                    | sha256:56cc51 | 2.4MB  |
| k8s.gcr.io/coredns/coredns                  | v1.8.6                          | sha256:a4ca41 | 13.6MB |
| k8s.gcr.io/pause                            | 3.6                             | sha256:6270bb | 302kB  |
|---------------------------------------------|---------------------------------|---------------|--------|
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.30s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412192609-42006 image ls --format json
functional_test.go:261: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20220412192609-42006 image ls --format json:
[{"id":"sha256:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03","repoDigests":["k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e"],"repoTags":["k8s.gcr.io/coredns/coredns:v1.8.6"],"size":"13585107"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.3"],"size":"297686"},{"id":"sha256:51696c87e77e4ff7a53af9be837f35d4eacdb47b4ca83ba5fd5e4b5101d98502","repoDigests":["docker.io/library/nginx@sha256:5a0df7fb7c8c03e4158ae9974bfbd6a15da2bdfdeded4fb694367ec812325d31"],"repoTags":["docker.io/library/nginx:alpine"],"size":"10171738"},{"id":"sha256:25f8c7f3da61c2a810effe5fa779cf80ca171afb0adf94c7cb51eb9a8546629d","repoDigests":["k8s.gcr.io/etcd@sha256:64b9ea357325d5db9f8a723dcf503b5a449177b17ac87d69481e126bb724c263"],"repoTags":["k8s.gcr.io/etcd:3.5.1-0"],"size":"98888614"},{"id":"sha256:3fc1d62d65872296462b198ab7842d0faf8c336b236c4a0dacfce67bec95257f","repoDigests":["k8s.gcr.io/kube-apiserver@sha256:ddf5bf7196eb534271f9e5d403f4da19838d5610bb5ca191001bde5f32b5492e"],"repoTags":["k8s.gcr.io/kube-apiserver:v1.23.5"],"size":"32603217"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.1"],"size":"353405"},{"id":"sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee","repoDigests":["k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db"],"repoTags":["k8s.gcr.io/pause:3.6"],"size":"301773"},{"id":"sha256:6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb","repoDigests":["docker.io/kindest/kindnetd@sha256:838bc1706e38391aefaa31fd52619fe8e57ad3dfb0d0ff414d902367fcc24c3c"],"repoTags":["docker.io/kindest/kindnetd:v20210326-1e038dc5"],"size":"53960776"},{"id":"sha256:41e0730705daed4718555e14b6a5dad6749609fa63444fed6b53900d9a22e8ff","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-20220412192609-42006"],"size":"1738"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["k8s.gcr.io/echoserver:1.8"],"size":"46237695"},{"id":"sha256:12766a6745eea133de9fdcd03ff720fa971fdaf21113d4bc72b417c123b15619","repoDigests":["docker.io/library/nginx@sha256:2275af0f20d71b293916f1958f8497f987b8d8fd8113df54635f2a5915002bf1"],"repoTags":["docker.io/library/nginx:latest"],"size":"56745242"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:3c53fa8541f95165d3def81704febb85e2e13f90872667f9939dd856dc88e874","repoDigests":["k8s.gcr.io/kube-proxy@sha256:c1f625d115fbd9a12eac615653fc81c0edb33b2b5a76d1e09d5daed11fa557c1"],"repoTags":["k8s.gcr.io/kube-proxy:v1.23.5"],"size":"39278412"},{"id":"sha256:884d49d6d8c9f40672d20c78e300ffee238d01c1ccb2c132937125d97a596fd7","repoDigests":["k8s.gcr.io/kube-scheduler@sha256:489efb65da9edc40bf0911f3e6371e5bb6b8ad8fde1d55193a6cc84c2ef36626"],"repoTags":["k8s.gcr.io/kube-scheduler:v1.23.5"],"size":"15131395"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["k8s.gcr.io/pause:latest"],"size":"72306"},{"id":"sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-20220412192609-42006"],"size":"10823156"},{"id":"sha256:b0c9e5e4dbb14459edc593b39add54f5497e42d4eecc8d03bee5daf9537b0dae","repoDigests":["k8s.gcr.io/kube-controller-manager@sha256:cca0fb3532abedcc95c5f64268d54da9ecc56cc4817ff08d0128941cf2b0e1a4"],"repoTags":["k8s.gcr.io/kube-controller-manager:v1.23.5"],"size":"30174093"}]
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)
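
The JSON form is an array of {id, repoDigests, repoTags, size} objects, which makes it the easiest format to script against. For example, listing just the tags (assuming jq is available on the host):

	out/minikube-linux-amd64 -p functional-20220412192609-42006 image ls --format json | jq -r '.[].repoTags[]'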

TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412192609-42006 image ls --format yaml
functional_test.go:261: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20220412192609-42006 image ls --format yaml:
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03
repoDigests:
- k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e
repoTags:
- k8s.gcr.io/coredns/coredns:v1.8.6
size: "13585107"
- id: sha256:3c53fa8541f95165d3def81704febb85e2e13f90872667f9939dd856dc88e874
repoDigests:
- k8s.gcr.io/kube-proxy@sha256:c1f625d115fbd9a12eac615653fc81c0edb33b2b5a76d1e09d5daed11fa557c1
repoTags:
- k8s.gcr.io/kube-proxy:v1.23.5
size: "39278412"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- k8s.gcr.io/pause:latest
size: "72306"
- id: sha256:6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb
repoDigests:
- docker.io/kindest/kindnetd@sha256:838bc1706e38391aefaa31fd52619fe8e57ad3dfb0d0ff414d902367fcc24c3c
repoTags:
- docker.io/kindest/kindnetd:v20210326-1e038dc5
size: "53960776"
- id: sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-20220412192609-42006
size: "10823156"
- id: sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee
repoDigests:
- k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db
repoTags:
- k8s.gcr.io/pause:3.6
size: "301773"
- id: sha256:12766a6745eea133de9fdcd03ff720fa971fdaf21113d4bc72b417c123b15619
repoDigests:
- docker.io/library/nginx@sha256:2275af0f20d71b293916f1958f8497f987b8d8fd8113df54635f2a5915002bf1
repoTags:
- docker.io/library/nginx:latest
size: "56745242"
- id: sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- k8s.gcr.io/echoserver:1.8
size: "46237695"
- id: sha256:25f8c7f3da61c2a810effe5fa779cf80ca171afb0adf94c7cb51eb9a8546629d
repoDigests:
- k8s.gcr.io/etcd@sha256:64b9ea357325d5db9f8a723dcf503b5a449177b17ac87d69481e126bb724c263
repoTags:
- k8s.gcr.io/etcd:3.5.1-0
size: "98888614"
- id: sha256:3fc1d62d65872296462b198ab7842d0faf8c336b236c4a0dacfce67bec95257f
repoDigests:
- k8s.gcr.io/kube-apiserver@sha256:ddf5bf7196eb534271f9e5d403f4da19838d5610bb5ca191001bde5f32b5492e
repoTags:
- k8s.gcr.io/kube-apiserver:v1.23.5
size: "32603217"
- id: sha256:b0c9e5e4dbb14459edc593b39add54f5497e42d4eecc8d03bee5daf9537b0dae
repoDigests:
- k8s.gcr.io/kube-controller-manager@sha256:cca0fb3532abedcc95c5f64268d54da9ecc56cc4817ff08d0128941cf2b0e1a4
repoTags:
- k8s.gcr.io/kube-controller-manager:v1.23.5
size: "30174093"
- id: sha256:884d49d6d8c9f40672d20c78e300ffee238d01c1ccb2c132937125d97a596fd7
repoDigests:
- k8s.gcr.io/kube-scheduler@sha256:489efb65da9edc40bf0911f3e6371e5bb6b8ad8fde1d55193a6cc84c2ef36626
repoTags:
- k8s.gcr.io/kube-scheduler:v1.23.5
size: "15131395"
- id: sha256:41e0730705daed4718555e14b6a5dad6749609fa63444fed6b53900d9a22e8ff
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-20220412192609-42006
size: "1738"
- id: sha256:51696c87e77e4ff7a53af9be837f35d4eacdb47b4ca83ba5fd5e4b5101d98502
repoDigests:
- docker.io/library/nginx@sha256:5a0df7fb7c8c03e4158ae9974bfbd6a15da2bdfdeded4fb694367ec812325d31
repoTags:
- docker.io/library/nginx:alpine
size: "10171738"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.1
size: "353405"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.3
size: "297686"

--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

TestFunctional/parallel/ImageCommands/ImageBuild (5.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:303: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412192609-42006 ssh pgrep buildkitd

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:303: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220412192609-42006 ssh pgrep buildkitd: exit status 1 (472.938704ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:310: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412192609-42006 image build -t localhost/my-image:functional-20220412192609-42006 testdata/build

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:310: (dbg) Done: out/minikube-linux-amd64 -p functional-20220412192609-42006 image build -t localhost/my-image:functional-20220412192609-42006 testdata/build: (4.330364671s)
functional_test.go:318: (dbg) Stderr: out/minikube-linux-amd64 -p functional-20220412192609-42006 image build -t localhost/my-image:functional-20220412192609-42006 testdata/build:
#2 [internal] load .dockerignore
#2 transferring context: 2B done
#2 DONE 1.3s

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 1.3s

#3 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#3 DONE 1.2s

#6 [internal] load build context
#6 transferring context: 62B done
#6 DONE 0.0s

#4 [1/3] FROM gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#4 resolve gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#4 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.0s done
#4 DONE 0.2s

#5 [2/3] RUN true
#5 DONE 0.2s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers
#8 exporting layers 0.3s done
#8 exporting manifest sha256:fe4a1614431a76c8dd534113dec6b46667c8c14639616c7dfd33e640ad416df1 0.0s done
#8 exporting config sha256:bc228b210626c807ca86215a6c1358455b82d8c92f480072a9b9b97f2bfe00d3 0.1s done
#8 naming to localhost/my-image:functional-20220412192609-42006 done
#8 DONE 0.4s
functional_test.go:443: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412192609-42006 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (5.08s)
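
testdata/build itself is not shown in the log, but the three BuildKit steps ([1/3] FROM, [2/3] RUN true, [3/3] ADD content.txt /) imply a Dockerfile equivalent to the sketch below; the actual test fixture may differ in detail:

	mkdir -p build-demo && cd build-demo
	printf 'hello\n' > content.txt
	cat > Dockerfile <<-'EOF'
	FROM gcr.io/k8s-minikube/busybox
	RUN true
	ADD content.txt /
	EOF
	out/minikube-linux-amd64 -p functional-20220412192609-42006 image build -t localhost/my-image:functional-20220412192609-42006 .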

TestFunctional/parallel/ImageCommands/Setup (1.54s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:337: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8

=== CONT  TestFunctional/parallel/ImageCommands/Setup
functional_test.go:337: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.502390304s)
functional_test.go:342: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-20220412192609-42006
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.54s)

TestFunctional/parallel/MountCmd/any-port (8.76s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:66: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-20220412192609-42006 /tmp/TestFunctionalparallelMountCmdany-port1197615378/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:100: wrote "test-1649791694139054184" to /tmp/TestFunctionalparallelMountCmdany-port1197615378/001/created-by-test
functional_test_mount_test.go:100: wrote "test-1649791694139054184" to /tmp/TestFunctionalparallelMountCmdany-port1197615378/001/created-by-test-removed-by-pod
functional_test_mount_test.go:100: wrote "test-1649791694139054184" to /tmp/TestFunctionalparallelMountCmdany-port1197615378/001/test-1649791694139054184
functional_test_mount_test.go:108: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412192609-42006 ssh "findmnt -T /mount-9p | grep 9p"
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220412192609-42006 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (419.130367ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:108: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412192609-42006 ssh "findmnt -T /mount-9p | grep 9p"
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:122: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412192609-42006 ssh -- ls -la /mount-9p
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:126: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Apr 12 19:28 created-by-test
-rw-r--r-- 1 docker docker 24 Apr 12 19:28 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Apr 12 19:28 test-1649791694139054184
functional_test_mount_test.go:130: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412192609-42006 ssh cat /mount-9p/test-1649791694139054184
functional_test_mount_test.go:141: (dbg) Run:  kubectl --context functional-20220412192609-42006 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:146: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:342: "busybox-mount" [a5aa2d9d-469c-45ab-80a2-80501e1c30c7] Pending
helpers_test.go:342: "busybox-mount" [a5aa2d9d-469c-45ab-80a2-80501e1c30c7] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:342: "busybox-mount" [a5aa2d9d-469c-45ab-80a2-80501e1c30c7] Running
=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:342: "busybox-mount" [a5aa2d9d-469c-45ab-80a2-80501e1c30c7] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:146: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.006266162s
functional_test_mount_test.go:162: (dbg) Run:  kubectl --context functional-20220412192609-42006 logs busybox-mount
functional_test_mount_test.go:174: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412192609-42006 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:174: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412192609-42006 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:83: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412192609-42006 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:87: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20220412192609-42006 /tmp/TestFunctionalparallelMountCmdany-port1197615378/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.76s)
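
Note that the first findmnt probe above exits non-zero only because the 9p mount had not yet appeared; the test retries and succeeds. The host-to-guest round trip it verifies can be reproduced by hand with the same commands; a sketch, with an illustrative host path:

	out/minikube-linux-amd64 mount -p functional-20220412192609-42006 /tmp/demo:/mount-9p &
	out/minikube-linux-amd64 -p functional-20220412192609-42006 ssh "findmnt -T /mount-9p | grep 9p"    # confirm the 9p mount is visible in the guest
	out/minikube-linux-amd64 -p functional-20220412192609-42006 ssh -- ls -la /mount-9p                 # files written on the host appear here
	out/minikube-linux-amd64 -p functional-20220412192609-42006 ssh "sudo umount -f /mount-9p"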

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:127: (dbg) daemon: [out/minikube-linux-amd64 -p functional-20220412192609-42006 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.27s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:147: (dbg) Run:  kubectl --context functional-20220412192609-42006 apply -f testdata/testsvc.yaml
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:342: "nginx-svc" [ceeab06e-b37b-405c-b0ce-61d45093a515] Pending
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
helpers_test.go:342: "nginx-svc" [ceeab06e-b37b-405c-b0ce-61d45093a515] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
helpers_test.go:342: "nginx-svc" [ceeab06e-b37b-405c-b0ce-61d45093a515] Running
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 11.0615476s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.27s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.54s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412192609-42006 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220412192609-42006
=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:350: (dbg) Done: out/minikube-linux-amd64 -p functional-20220412192609-42006 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220412192609-42006: (4.285035958s)
functional_test.go:443: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412192609-42006 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.54s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (4.76s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:360: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412192609-42006 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220412192609-42006
=== CONT  TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:360: (dbg) Done: out/minikube-linux-amd64 -p functional-20220412192609-42006 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220412192609-42006: (4.445402281s)
functional_test.go:443: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412192609-42006 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (4.76s)

TestFunctional/parallel/MountCmd/specific-port (2.2s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:206: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-20220412192609-42006 /tmp/TestFunctionalparallelMountCmdspecific-port148726577/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412192609-42006 ssh "findmnt -T /mount-9p | grep 9p"
E0412 19:28:23.153804   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/addons-20220412192056-42006/client.crt: no such file or directory
functional_test_mount_test.go:236: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220412192609-42006 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (410.497557ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412192609-42006 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:250: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412192609-42006 ssh -- ls -la /mount-9p
=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:254: guest mount directory contents
total 0
functional_test_mount_test.go:256: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20220412192609-42006 /tmp/TestFunctionalparallelMountCmdspecific-port148726577/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:257: reading mount text
functional_test_mount_test.go:271: done reading mount text
functional_test_mount_test.go:223: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412192609-42006 ssh "sudo umount -f /mount-9p"
=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:223: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20220412192609-42006 ssh "sudo umount -f /mount-9p": exit status 1 (393.594834ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:225: "out/minikube-linux-amd64 -p functional-20220412192609-42006 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:227: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20220412192609-42006 /tmp/TestFunctionalparallelMountCmdspecific-port148726577/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.20s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:230: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
=== CONT  TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-20220412192609-42006
functional_test.go:240: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412192609-42006 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220412192609-42006
=== CONT  TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:240: (dbg) Done: out/minikube-linux-amd64 -p functional-20220412192609-42006 image load --daemon gcr.io/google-containers/addon-resizer:functional-20220412192609-42006: (5.375876624s)
functional_test.go:443: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412192609-42006 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.07s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-20220412192609-42006 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:234: tunnel at http://10.97.168.100 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
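
The serial tunnel subtests reduce to: start a tunnel, deploy a LoadBalancer service, read the ingress IP it is assigned, and hit that IP directly. A sketch assembled from the commands above; 10.97.168.100 is just the IP this particular run assigned:

	out/minikube-linux-amd64 -p functional-20220412192609-42006 tunnel --alsologtostderr &
	kubectl --context functional-20220412192609-42006 apply -f testdata/testsvc.yaml
	kubectl --context functional-20220412192609-42006 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
	curl http://10.97.168.100/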

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:369: (dbg) stopping [out/minikube-linux-amd64 -p functional-20220412192609-42006 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.81s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:375: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412192609-42006 image save gcr.io/google-containers/addon-resizer:functional-20220412192609-42006 /home/jenkins/workspace/Docker_Linux_containerd_integration/addon-resizer-save.tar
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.81s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:387: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412192609-42006 image rm gcr.io/google-containers/addon-resizer:functional-20220412192609-42006
functional_test.go:443: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412192609-42006 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.15s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:404: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412192609-42006 image load /home/jenkins/workspace/Docker_Linux_containerd_integration/addon-resizer-save.tar
functional_test.go:443: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412192609-42006 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.15s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.01s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:414: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-20220412192609-42006
functional_test.go:419: (dbg) Run:  out/minikube-linux-amd64 -p functional-20220412192609-42006 image save --daemon gcr.io/google-containers/addon-resizer:functional-20220412192609-42006
=== CONT  TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:424: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-20220412192609-42006
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.01s)
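
ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon together form a save/remove/reload round trip. The same sequence by hand, with the tar written to a relative path for brevity:

	out/minikube-linux-amd64 -p functional-20220412192609-42006 image save gcr.io/google-containers/addon-resizer:functional-20220412192609-42006 ./addon-resizer-save.tar
	out/minikube-linux-amd64 -p functional-20220412192609-42006 image rm gcr.io/google-containers/addon-resizer:functional-20220412192609-42006
	out/minikube-linux-amd64 -p functional-20220412192609-42006 image load ./addon-resizer-save.tar
	out/minikube-linux-amd64 -p functional-20220412192609-42006 image save --daemon gcr.io/google-containers/addon-resizer:functional-20220412192609-42006
	docker image inspect gcr.io/google-containers/addon-resizer:functional-20220412192609-42006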

TestFunctional/parallel/ProfileCmd/profile_not_create (0.52s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1268: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1273: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.52s)

TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1308: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1313: Took "370.17878ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1322: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1327: Took "64.074214ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.46s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1359: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
=== CONT  TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1364: Took "386.659298ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1372: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1377: Took "73.930674ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.46s)
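
The ProfileCmd subtests exercise the profile list variants side by side; the -l/--light forms return roughly five times faster above, which is consistent with them skipping per-cluster status probes:

	out/minikube-linux-amd64 profile list
	out/minikube-linux-amd64 profile list -l
	out/minikube-linux-amd64 profile list -o json
	out/minikube-linux-amd64 profile list -o json --light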

TestFunctional/delete_addon-resizer_images (0.1s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:185: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:185: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-20220412192609-42006
--- PASS: TestFunctional/delete_addon-resizer_images (0.10s)

TestFunctional/delete_my-image_image (0.03s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:193: (dbg) Run:  docker rmi -f localhost/my-image:functional-20220412192609-42006
--- PASS: TestFunctional/delete_my-image_image (0.03s)

TestFunctional/delete_minikube_cached_images (0.03s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:201: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-20220412192609-42006
--- PASS: TestFunctional/delete_minikube_cached_images (0.03s)

TestIngressAddonLegacy/StartLegacyK8sCluster (89.73s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-20220412192911-42006 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E0412 19:29:24.595566   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/addons-20220412192056-42006/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-20220412192911-42006 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (1m29.729993548s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (89.73s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (13.2s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220412192911-42006 addons enable ingress --alsologtostderr -v=5
E0412 19:30:46.516229   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/addons-20220412192056-42006/client.crt: no such file or directory
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-20220412192911-42006 addons enable ingress --alsologtostderr -v=5: (13.201686374s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (13.20s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.39s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220412192911-42006 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.39s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (38.63s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:162: (dbg) Run:  kubectl --context ingress-addon-legacy-20220412192911-42006 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:162: (dbg) Done: kubectl --context ingress-addon-legacy-20220412192911-42006 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (17.840797031s)
addons_test.go:182: (dbg) Run:  kubectl --context ingress-addon-legacy-20220412192911-42006 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:195: (dbg) Run:  kubectl --context ingress-addon-legacy-20220412192911-42006 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:200: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:342: "nginx" [467e8c0f-ee8a-4f0e-bae9-95e80fd4b999] Pending
helpers_test.go:342: "nginx" [467e8c0f-ee8a-4f0e-bae9-95e80fd4b999] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:342: "nginx" [467e8c0f-ee8a-4f0e-bae9-95e80fd4b999] Running
addons_test.go:200: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 10.035407198s
addons_test.go:212: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220412192911-42006 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:236: (dbg) Run:  kubectl --context ingress-addon-legacy-20220412192911-42006 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:241: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220412192911-42006 ip
addons_test.go:247: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220412192911-42006 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-20220412192911-42006 addons disable ingress-dns --alsologtostderr -v=1: (2.020219486s)
addons_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-20220412192911-42006 addons disable ingress --alsologtostderr -v=1
addons_test.go:261: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-20220412192911-42006 addons disable ingress --alsologtostderr -v=1: (7.34315007s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (38.63s)
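
Condensed, the legacy-ingress validation is: enable the addons, apply a v1beta1 Ingress plus a backing pod and service, curl through the controller with the expected Host header, then resolve a test name against the node IP via ingress-dns. A sketch of the same steps (testdata paths as in the log; the test itself uses kubectl replace --force rather than apply):

	out/minikube-linux-amd64 -p ingress-addon-legacy-20220412192911-42006 addons enable ingress
	out/minikube-linux-amd64 -p ingress-addon-legacy-20220412192911-42006 addons enable ingress-dns
	kubectl --context ingress-addon-legacy-20220412192911-42006 apply -f testdata/nginx-ingress-v1beta1.yaml
	kubectl --context ingress-addon-legacy-20220412192911-42006 apply -f testdata/nginx-pod-svc.yaml
	out/minikube-linux-amd64 -p ingress-addon-legacy-20220412192911-42006 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
	nslookup hello-john.test 192.168.49.2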

TestJSONOutput/start/Command (88.14s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-20220412193136-42006 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
E0412 19:33:02.670279   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/addons-20220412192056-42006/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-20220412193136-42006 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (1m28.141905639s)
--- PASS: TestJSONOutput/start/Command (88.14s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.73s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-20220412193136-42006 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.73s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.66s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-20220412193136-42006 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.66s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (15.77s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-20220412193136-42006 --output=json --user=testUser
E0412 19:33:14.515164   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/functional-20220412192609-42006/client.crt: no such file or directory
E0412 19:33:14.520460   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/functional-20220412192609-42006/client.crt: no such file or directory
E0412 19:33:14.530764   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/functional-20220412192609-42006/client.crt: no such file or directory
E0412 19:33:14.551126   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/functional-20220412192609-42006/client.crt: no such file or directory
E0412 19:33:14.591490   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/functional-20220412192609-42006/client.crt: no such file or directory
E0412 19:33:14.671941   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/functional-20220412192609-42006/client.crt: no such file or directory
E0412 19:33:14.832413   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/functional-20220412192609-42006/client.crt: no such file or directory
E0412 19:33:15.153029   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/functional-20220412192609-42006/client.crt: no such file or directory
E0412 19:33:15.793980   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/functional-20220412192609-42006/client.crt: no such file or directory
E0412 19:33:17.074563   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/functional-20220412192609-42006/client.crt: no such file or directory
E0412 19:33:19.635341   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/functional-20220412192609-42006/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-20220412193136-42006 --output=json --user=testUser: (15.768180005s)
--- PASS: TestJSONOutput/stop/Command (15.77s)
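
For reference, the pause, unpause and stop commands driven by this group all take the same JSON-output flags; the --user value feeds minikube's audit log, which appears to be what the Audit subtests then check:

	out/minikube-linux-amd64 pause -p json-output-20220412193136-42006 --output=json --user=testUser
	out/minikube-linux-amd64 unpause -p json-output-20220412193136-42006 --output=json --user=testUser
	out/minikube-linux-amd64 stop -p json-output-20220412193136-42006 --output=json --user=testUser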

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.3s)

=== RUN   TestErrorJSONOutput
json_output_test.go:149: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-20220412193326-42006 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-20220412193326-42006 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (68.503735ms)
-- stdout --
	{"specversion":"1.0","id":"3598aa14-7e9f-4298-bac5-ae8e15bcaead","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-20220412193326-42006] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"0b326284-4b2c-4cc8-a426-783f7f280362","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=13812"}}
	{"specversion":"1.0","id":"f872eed8-9be9-4730-9c27-60043ecd4339","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"4e843d22-9d94-483c-b6b2-dc93121d3f80","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig"}}
	{"specversion":"1.0","id":"6ea979b1-4579-410d-8f96-e15e79bfc9d8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube"}}
	{"specversion":"1.0","id":"4b9ebb0e-5a7d-4c8d-9454-a399d3d4db8a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"8e2f88e1-6a45-4895-96d4-e45eb8e54b68","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-20220412193326-42006" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-20220412193326-42006
--- PASS: TestErrorJSONOutput (0.30s)
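
Each line in the stdout block above is a CloudEvent, with the event kind in .type (step, info or error) and the minikube payload under .data. A sketch of filtering such a stream with jq, on a hypothetical profile, to print only the step messages; the exact stdout/stderr split of --output=json is an assumption here:

	# 'demo' is a hypothetical profile name, not one from this run
	out/minikube-linux-amd64 start -p demo --output=json \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.message'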

TestKicCustomNetwork/create_custom_network (33.06s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-20220412193327-42006 --network=
E0412 19:33:30.357931   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/addons-20220412192056-42006/client.crt: no such file or directory
E0412 19:33:34.997021   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/functional-20220412192609-42006/client.crt: no such file or directory
E0412 19:33:55.477540   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/functional-20220412192609-42006/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-20220412193327-42006 --network=: (30.759622046s)
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-20220412193327-42006" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-20220412193327-42006
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-20220412193327-42006: (2.26778096s)
--- PASS: TestKicCustomNetwork/create_custom_network (33.06s)

TestKicCustomNetwork/use_default_bridge_network (27.39s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-20220412193400-42006 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-20220412193400-42006 --network=bridge: (25.190188059s)
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-20220412193400-42006" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-20220412193400-42006
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-20220412193400-42006: (2.159951253s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (27.39s)

TestKicExistingNetwork (28.85s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-20220412193427-42006 --network=existing-network
E0412 19:34:36.438756   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/functional-20220412192609-42006/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-20220412193427-42006 --network=existing-network: (26.281566185s)
helpers_test.go:175: Cleaning up "existing-network-20220412193427-42006" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-20220412193427-42006
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-20220412193427-42006: (2.342212844s)
--- PASS: TestKicExistingNetwork (28.85s)

TestKicCustomSubnet (28.47s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-20220412193456-42006 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-20220412193456-42006 --subnet=192.168.60.0/24: (26.141046491s)
kic_custom_network_test.go:133: (dbg) Run:  docker network inspect custom-subnet-20220412193456-42006 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-20220412193456-42006" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-20220412193456-42006
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-20220412193456-42006: (2.292739179s)
--- PASS: TestKicCustomSubnet (28.47s)
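
The kic network tests exercise two start-time knobs: --network chooses the docker network the node container joins (a named network, the default bridge, or a pre-created one), and --subnet fixes its address range. A sketch with illustrative profile names; the subnet value mirrors the test:

	out/minikube-linux-amd64 start -p net-a --network=bridge
	out/minikube-linux-amd64 start -p net-b --subnet=192.168.60.0/24
	docker network inspect net-b --format "{{(index .IPAM.Config 0).Subnet}}"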

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMountStart/serial/StartWithMountFirst (5.04s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-20220412193524-42006 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-20220412193524-42006 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (4.043424898s)
--- PASS: TestMountStart/serial/StartWithMountFirst (5.04s)

TestMountStart/serial/VerifyMountFirst (0.34s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-20220412193524-42006 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.34s)

TestMountStart/serial/StartWithMountSecond (4.96s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-20220412193524-42006 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-20220412193524-42006 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (3.963086998s)
--- PASS: TestMountStart/serial/StartWithMountSecond (4.96s)

TestMountStart/serial/VerifyMountSecond (0.34s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-20220412193524-42006 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.34s)

TestMountStart/serial/DeleteFirst (1.88s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-20220412193524-42006 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-20220412193524-42006 --alsologtostderr -v=5: (1.879575391s)
--- PASS: TestMountStart/serial/DeleteFirst (1.88s)

TestMountStart/serial/VerifyMountPostDelete (0.34s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-20220412193524-42006 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.34s)

TestMountStart/serial/Stop (1.27s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-20220412193524-42006
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-20220412193524-42006: (1.268659274s)
--- PASS: TestMountStart/serial/Stop (1.27s)

TestMountStart/serial/RestartStopped (6.44s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-20220412193524-42006
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-20220412193524-42006: (5.439308927s)
--- PASS: TestMountStart/serial/RestartStopped (6.44s)

TestMountStart/serial/VerifyMountPostStop (0.34s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-20220412193524-42006 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.34s)
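
The MountStart group wires the 9p mount up at start time instead of via a separate mount process, then checks that it survives deleting a sibling profile, a stop, and a restart. The flag set used above, on an illustrative profile:

	out/minikube-linux-amd64 start -p mount-demo --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker --container-runtime=containerd
	out/minikube-linux-amd64 -p mount-demo ssh -- ls /minikube-host    # verify the host mount inside the guest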

TestMultiNode/serial/FreshStart2Nodes (102.56s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20220412193547-42006 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0412 19:35:54.808022   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/ingress-addon-legacy-20220412192911-42006/client.crt: no such file or directory
E0412 19:35:54.813371   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/ingress-addon-legacy-20220412192911-42006/client.crt: no such file or directory
E0412 19:35:54.823688   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/ingress-addon-legacy-20220412192911-42006/client.crt: no such file or directory
E0412 19:35:54.844033   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/ingress-addon-legacy-20220412192911-42006/client.crt: no such file or directory
E0412 19:35:54.884387   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/ingress-addon-legacy-20220412192911-42006/client.crt: no such file or directory
E0412 19:35:54.964756   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/ingress-addon-legacy-20220412192911-42006/client.crt: no such file or directory
E0412 19:35:55.125268   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/ingress-addon-legacy-20220412192911-42006/client.crt: no such file or directory
E0412 19:35:55.445921   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/ingress-addon-legacy-20220412192911-42006/client.crt: no such file or directory
E0412 19:35:56.086884   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/ingress-addon-legacy-20220412192911-42006/client.crt: no such file or directory
E0412 19:35:57.367350   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/ingress-addon-legacy-20220412192911-42006/client.crt: no such file or directory
E0412 19:35:58.359791   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/functional-20220412192609-42006/client.crt: no such file or directory
E0412 19:35:59.928446   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/ingress-addon-legacy-20220412192911-42006/client.crt: no such file or directory
E0412 19:36:05.049604   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/ingress-addon-legacy-20220412192911-42006/client.crt: no such file or directory
E0412 19:36:15.290701   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/ingress-addon-legacy-20220412192911-42006/client.crt: no such file or directory
E0412 19:36:35.771434   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/ingress-addon-legacy-20220412192911-42006/client.crt: no such file or directory
E0412 19:37:16.732315   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/ingress-addon-legacy-20220412192911-42006/client.crt: no such file or directory
multinode_test.go:83: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20220412193547-42006 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m41.966649452s)
multinode_test.go:89: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220412193547-42006 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (102.56s)

TestMultiNode/serial/DeployApp2Nodes (4.65s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220412193547-42006 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220412193547-42006 -- rollout status deployment/busybox
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-20220412193547-42006 -- rollout status deployment/busybox: (3.01012367s)
multinode_test.go:490: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220412193547-42006 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220412193547-42006 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:510: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220412193547-42006 -- exec busybox-7978565885-pfwvv -- nslookup kubernetes.io
multinode_test.go:510: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220412193547-42006 -- exec busybox-7978565885-ws778 -- nslookup kubernetes.io
multinode_test.go:520: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220412193547-42006 -- exec busybox-7978565885-pfwvv -- nslookup kubernetes.default
multinode_test.go:520: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220412193547-42006 -- exec busybox-7978565885-ws778 -- nslookup kubernetes.default
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220412193547-42006 -- exec busybox-7978565885-pfwvv -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220412193547-42006 -- exec busybox-7978565885-ws778 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.65s)

TestMultiNode/serial/PingHostFrom2Pods (0.84s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:538: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220412193547-42006 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220412193547-42006 -- exec busybox-7978565885-pfwvv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220412193547-42006 -- exec busybox-7978565885-pfwvv -- sh -c "ping -c 1 192.168.49.1"
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220412193547-42006 -- exec busybox-7978565885-ws778 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20220412193547-42006 -- exec busybox-7978565885-ws778 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.84s)
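
Note: the two exec probes above resolve host.minikube.internal inside each pod, then ping the address that comes back. A minimal standalone sketch of that probe, assuming BusyBox's nslookup output (line 5, field 3 holds the resolved address); the pod name is taken from this run and is otherwise arbitrary:

	POD=busybox-7978565885-pfwvv
	# Resolve the host gateway name from inside the pod and keep only the IP.
	HOST_IP=$(kubectl exec "$POD" -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
	# A single ICMP round trip confirms pod-to-host connectivity.
	kubectl exec "$POD" -- ping -c 1 "$HOST_IP"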

TestMultiNode/serial/AddNode (41.71s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:108: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-20220412193547-42006 -v 3 --alsologtostderr
E0412 19:38:02.669350   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/addons-20220412192056-42006/client.crt: no such file or directory
E0412 19:38:14.515431   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/functional-20220412192609-42006/client.crt: no such file or directory
multinode_test.go:108: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-20220412193547-42006 -v 3 --alsologtostderr: (40.918053145s)
multinode_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220412193547-42006 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (41.71s)

TestMultiNode/serial/ProfileList (0.38s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:130: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.38s)

TestMultiNode/serial/CopyFile (12.3s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220412193547-42006 status --output json --alsologtostderr
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220412193547-42006 cp testdata/cp-test.txt multinode-20220412193547-42006:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220412193547-42006 ssh -n multinode-20220412193547-42006 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220412193547-42006 cp multinode-20220412193547-42006:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2585028438/001/cp-test_multinode-20220412193547-42006.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220412193547-42006 ssh -n multinode-20220412193547-42006 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220412193547-42006 cp multinode-20220412193547-42006:/home/docker/cp-test.txt multinode-20220412193547-42006-m02:/home/docker/cp-test_multinode-20220412193547-42006_multinode-20220412193547-42006-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220412193547-42006 ssh -n multinode-20220412193547-42006 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220412193547-42006 ssh -n multinode-20220412193547-42006-m02 "sudo cat /home/docker/cp-test_multinode-20220412193547-42006_multinode-20220412193547-42006-m02.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220412193547-42006 cp multinode-20220412193547-42006:/home/docker/cp-test.txt multinode-20220412193547-42006-m03:/home/docker/cp-test_multinode-20220412193547-42006_multinode-20220412193547-42006-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220412193547-42006 ssh -n multinode-20220412193547-42006 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220412193547-42006 ssh -n multinode-20220412193547-42006-m03 "sudo cat /home/docker/cp-test_multinode-20220412193547-42006_multinode-20220412193547-42006-m03.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220412193547-42006 cp testdata/cp-test.txt multinode-20220412193547-42006-m02:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220412193547-42006 ssh -n multinode-20220412193547-42006-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220412193547-42006 cp multinode-20220412193547-42006-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2585028438/001/cp-test_multinode-20220412193547-42006-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220412193547-42006 ssh -n multinode-20220412193547-42006-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220412193547-42006 cp multinode-20220412193547-42006-m02:/home/docker/cp-test.txt multinode-20220412193547-42006:/home/docker/cp-test_multinode-20220412193547-42006-m02_multinode-20220412193547-42006.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220412193547-42006 ssh -n multinode-20220412193547-42006-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220412193547-42006 ssh -n multinode-20220412193547-42006 "sudo cat /home/docker/cp-test_multinode-20220412193547-42006-m02_multinode-20220412193547-42006.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220412193547-42006 cp multinode-20220412193547-42006-m02:/home/docker/cp-test.txt multinode-20220412193547-42006-m03:/home/docker/cp-test_multinode-20220412193547-42006-m02_multinode-20220412193547-42006-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220412193547-42006 ssh -n multinode-20220412193547-42006-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220412193547-42006 ssh -n multinode-20220412193547-42006-m03 "sudo cat /home/docker/cp-test_multinode-20220412193547-42006-m02_multinode-20220412193547-42006-m03.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220412193547-42006 cp testdata/cp-test.txt multinode-20220412193547-42006-m03:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220412193547-42006 ssh -n multinode-20220412193547-42006-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220412193547-42006 cp multinode-20220412193547-42006-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2585028438/001/cp-test_multinode-20220412193547-42006-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220412193547-42006 ssh -n multinode-20220412193547-42006-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220412193547-42006 cp multinode-20220412193547-42006-m03:/home/docker/cp-test.txt multinode-20220412193547-42006:/home/docker/cp-test_multinode-20220412193547-42006-m03_multinode-20220412193547-42006.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220412193547-42006 ssh -n multinode-20220412193547-42006-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220412193547-42006 ssh -n multinode-20220412193547-42006 "sudo cat /home/docker/cp-test_multinode-20220412193547-42006-m03_multinode-20220412193547-42006.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220412193547-42006 cp multinode-20220412193547-42006-m03:/home/docker/cp-test.txt multinode-20220412193547-42006-m02:/home/docker/cp-test_multinode-20220412193547-42006-m03_multinode-20220412193547-42006-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220412193547-42006 ssh -n multinode-20220412193547-42006-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220412193547-42006 ssh -n multinode-20220412193547-42006-m02 "sudo cat /home/docker/cp-test_multinode-20220412193547-42006-m03_multinode-20220412193547-42006-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (12.30s)
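
Note: each cp/ssh pair above is one round trip of the same check: copy cp-test.txt onto a node (from the host or from another node), then read it back over SSH to confirm it landed intact. The shape of one leg, with hypothetical profile and node names:

	# Host -> node copy, then verify the contents over SSH.
	out/minikube-linux-amd64 -p demo cp testdata/cp-test.txt demo-m02:/home/docker/cp-test.txt
	out/minikube-linux-amd64 -p demo ssh -n demo-m02 "sudo cat /home/docker/cp-test.txt"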

TestMultiNode/serial/StopNode (7.04s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:208: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220412193547-42006 node stop m03
multinode_test.go:208: (dbg) Done: out/minikube-linux-amd64 -p multinode-20220412193547-42006 node stop m03: (5.793334554s)
multinode_test.go:214: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220412193547-42006 status
multinode_test.go:214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20220412193547-42006 status: exit status 7 (623.813043ms)

-- stdout --
	multinode-20220412193547-42006
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20220412193547-42006-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20220412193547-42006-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:221: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220412193547-42006 status --alsologtostderr
multinode_test.go:221: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20220412193547-42006 status --alsologtostderr: exit status 7 (623.835605ms)

-- stdout --
	multinode-20220412193547-42006
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20220412193547-42006-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20220412193547-42006-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0412 19:38:36.883901  122735 out.go:297] Setting OutFile to fd 1 ...
	I0412 19:38:36.884014  122735 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0412 19:38:36.884022  122735 out.go:310] Setting ErrFile to fd 2...
	I0412 19:38:36.884027  122735 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0412 19:38:36.884168  122735 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/bin
	I0412 19:38:36.884382  122735 out.go:304] Setting JSON to false
	I0412 19:38:36.884411  122735 mustload.go:65] Loading cluster: multinode-20220412193547-42006
	I0412 19:38:36.884740  122735 config.go:178] Loaded profile config "multinode-20220412193547-42006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.5
	I0412 19:38:36.884759  122735 status.go:253] checking status of multinode-20220412193547-42006 ...
	I0412 19:38:36.885176  122735 cli_runner.go:164] Run: docker container inspect multinode-20220412193547-42006 --format={{.State.Status}}
	I0412 19:38:36.919606  122735 status.go:328] multinode-20220412193547-42006 host status = "Running" (err=<nil>)
	I0412 19:38:36.919637  122735 host.go:66] Checking if "multinode-20220412193547-42006" exists ...
	I0412 19:38:36.919898  122735 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220412193547-42006
	I0412 19:38:36.952168  122735 host.go:66] Checking if "multinode-20220412193547-42006" exists ...
	I0412 19:38:36.952455  122735 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0412 19:38:36.952508  122735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220412193547-42006
	I0412 19:38:36.987091  122735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49217 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/multinode-20220412193547-42006/id_rsa Username:docker}
	I0412 19:38:37.072749  122735 ssh_runner.go:195] Run: systemctl --version
	I0412 19:38:37.076558  122735 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0412 19:38:37.086496  122735 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0412 19:38:37.183055  122735 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:75 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:44 SystemTime:2022-04-12 19:38:37.116435577 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1023-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662799872 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0412 19:38:37.183882  122735 kubeconfig.go:92] found "multinode-20220412193547-42006" server: "https://192.168.49.2:8443"
	I0412 19:38:37.183919  122735 api_server.go:165] Checking apiserver status ...
	I0412 19:38:37.183966  122735 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0412 19:38:37.194231  122735 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1156/cgroup
	I0412 19:38:37.202750  122735 api_server.go:181] apiserver freezer: "10:freezer:/docker/e10ec48c7980c8cba32f7a5198c8c907a70a0fd2bdfd80fae22a5fd6b914c706/kubepods/burstable/pode6399423c61183223bd3f97b343b2ef9/6826b0438fd94cc0aae2a961ba84f6c5a526189c5e888fc70b65ac5987d7a824"
	I0412 19:38:37.202830  122735 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/e10ec48c7980c8cba32f7a5198c8c907a70a0fd2bdfd80fae22a5fd6b914c706/kubepods/burstable/pode6399423c61183223bd3f97b343b2ef9/6826b0438fd94cc0aae2a961ba84f6c5a526189c5e888fc70b65ac5987d7a824/freezer.state
	I0412 19:38:37.209682  122735 api_server.go:203] freezer state: "THAWED"
	I0412 19:38:37.209719  122735 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0412 19:38:37.214397  122735 api_server.go:266] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0412 19:38:37.214420  122735 status.go:419] multinode-20220412193547-42006 apiserver status = Running (err=<nil>)
	I0412 19:38:37.214434  122735 status.go:255] multinode-20220412193547-42006 status: &{Name:multinode-20220412193547-42006 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0412 19:38:37.214463  122735 status.go:253] checking status of multinode-20220412193547-42006-m02 ...
	I0412 19:38:37.214739  122735 cli_runner.go:164] Run: docker container inspect multinode-20220412193547-42006-m02 --format={{.State.Status}}
	I0412 19:38:37.248249  122735 status.go:328] multinode-20220412193547-42006-m02 host status = "Running" (err=<nil>)
	I0412 19:38:37.248278  122735 host.go:66] Checking if "multinode-20220412193547-42006-m02" exists ...
	I0412 19:38:37.248651  122735 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20220412193547-42006-m02
	I0412 19:38:37.282033  122735 host.go:66] Checking if "multinode-20220412193547-42006-m02" exists ...
	I0412 19:38:37.282297  122735 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0412 19:38:37.282336  122735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20220412193547-42006-m02
	I0412 19:38:37.317042  122735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49222 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/machines/multinode-20220412193547-42006-m02/id_rsa Username:docker}
	I0412 19:38:37.400770  122735 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0412 19:38:37.410458  122735 status.go:255] multinode-20220412193547-42006-m02 status: &{Name:multinode-20220412193547-42006-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0412 19:38:37.410509  122735 status.go:253] checking status of multinode-20220412193547-42006-m03 ...
	I0412 19:38:37.410877  122735 cli_runner.go:164] Run: docker container inspect multinode-20220412193547-42006-m03 --format={{.State.Status}}
	I0412 19:38:37.444222  122735 status.go:328] multinode-20220412193547-42006-m03 host status = "Stopped" (err=<nil>)
	I0412 19:38:37.444247  122735 status.go:341] host is not running, skipping remaining checks
	I0412 19:38:37.444255  122735 status.go:255] multinode-20220412193547-42006-m03 status: &{Name:multinode-20220412193547-42006-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (7.04s)
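
Note: the stderr trace above shows how status concludes the apiserver is Running: find the kube-apiserver PID, look up that PID's freezer cgroup, confirm the cgroup is THAWED (i.e. the container is not paused), then poll /healthz. A rough shell equivalent of that sequence, run on the node itself (minikube's real implementation is Go; the paths and patterns here are the ones in this trace, and the healthz call may need the cluster's certificates):

	# 1. Newest kube-apiserver process whose command line mentions minikube.
	PID=$(sudo pgrep -xnf 'kube-apiserver.*minikube.*')
	# 2. Its freezer cgroup: the third colon-separated field of that entry.
	CG=$(sudo egrep '^[0-9]+:freezer:' "/proc/$PID/cgroup" | cut -d: -f3)
	# 3. THAWED means the container is live, not paused.
	sudo cat "/sys/fs/cgroup/freezer$CG/freezer.state"
	# 4. Finally the health endpoint; HTTP 200 with body "ok" maps to Running.
	curl -k https://192.168.49.2:8443/healthz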

TestMultiNode/serial/StartAfterStop (36.02s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:242: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:252: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220412193547-42006 node start m03 --alsologtostderr
E0412 19:38:38.653271   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/ingress-addon-legacy-20220412192911-42006/client.crt: no such file or directory
E0412 19:38:42.200372   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/functional-20220412192609-42006/client.crt: no such file or directory
multinode_test.go:252: (dbg) Done: out/minikube-linux-amd64 -p multinode-20220412193547-42006 node start m03 --alsologtostderr: (35.154154542s)
multinode_test.go:259: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220412193547-42006 status
multinode_test.go:273: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (36.02s)

TestMultiNode/serial/RestartKeepsNodes (174.48s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:281: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-20220412193547-42006
multinode_test.go:288: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-20220412193547-42006
multinode_test.go:288: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-20220412193547-42006: (45.883020377s)
multinode_test.go:293: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20220412193547-42006 --wait=true -v=8 --alsologtostderr
E0412 19:40:54.807448   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/ingress-addon-legacy-20220412192911-42006/client.crt: no such file or directory
E0412 19:41:22.493641   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/ingress-addon-legacy-20220412192911-42006/client.crt: no such file or directory
multinode_test.go:293: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20220412193547-42006 --wait=true -v=8 --alsologtostderr: (2m8.461832718s)
multinode_test.go:298: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-20220412193547-42006
--- PASS: TestMultiNode/serial/RestartKeepsNodes (174.48s)

TestMultiNode/serial/DeleteNode (9.87s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220412193547-42006 node delete m03
multinode_test.go:392: (dbg) Done: out/minikube-linux-amd64 -p multinode-20220412193547-42006 node delete m03: (9.033432901s)
multinode_test.go:398: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220412193547-42006 status --alsologtostderr
multinode_test.go:412: (dbg) Run:  docker volume ls
multinode_test.go:422: (dbg) Run:  kubectl get nodes
multinode_test.go:430: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (9.87s)
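
Note: the go-template query above is the readiness assertion for the remaining nodes: it walks every node's .status.conditions and prints the status of the condition whose type is "Ready", so a healthy two-node cluster emits "True" twice. The same template with an explanatory comment:

	# Prints one " True" line per Ready node; a "False" or "Unknown" fails the check.
	kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'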

TestMultiNode/serial/StopMultiNode (40.52s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:312: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220412193547-42006 stop
multinode_test.go:312: (dbg) Done: out/minikube-linux-amd64 -p multinode-20220412193547-42006 stop: (40.258622642s)
multinode_test.go:318: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220412193547-42006 status
multinode_test.go:318: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20220412193547-42006 status: exit status 7 (130.60064ms)

-- stdout --
	multinode-20220412193547-42006
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20220412193547-42006-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220412193547-42006 status --alsologtostderr
multinode_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20220412193547-42006 status --alsologtostderr: exit status 7 (127.610209ms)

-- stdout --
	multinode-20220412193547-42006
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20220412193547-42006-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0412 19:42:58.273350  133323 out.go:297] Setting OutFile to fd 1 ...
	I0412 19:42:58.273477  133323 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0412 19:42:58.273486  133323 out.go:310] Setting ErrFile to fd 2...
	I0412 19:42:58.273490  133323 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0412 19:42:58.273606  133323 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/bin
	I0412 19:42:58.273765  133323 out.go:304] Setting JSON to false
	I0412 19:42:58.273787  133323 mustload.go:65] Loading cluster: multinode-20220412193547-42006
	I0412 19:42:58.274153  133323 config.go:178] Loaded profile config "multinode-20220412193547-42006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.23.5
	I0412 19:42:58.274171  133323 status.go:253] checking status of multinode-20220412193547-42006 ...
	I0412 19:42:58.274579  133323 cli_runner.go:164] Run: docker container inspect multinode-20220412193547-42006 --format={{.State.Status}}
	I0412 19:42:58.308097  133323 status.go:328] multinode-20220412193547-42006 host status = "Stopped" (err=<nil>)
	I0412 19:42:58.308127  133323 status.go:341] host is not running, skipping remaining checks
	I0412 19:42:58.308134  133323 status.go:255] multinode-20220412193547-42006 status: &{Name:multinode-20220412193547-42006 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0412 19:42:58.308166  133323 status.go:253] checking status of multinode-20220412193547-42006-m02 ...
	I0412 19:42:58.308465  133323 cli_runner.go:164] Run: docker container inspect multinode-20220412193547-42006-m02 --format={{.State.Status}}
	I0412 19:42:58.340956  133323 status.go:328] multinode-20220412193547-42006-m02 host status = "Stopped" (err=<nil>)
	I0412 19:42:58.340987  133323 status.go:341] host is not running, skipping remaining checks
	I0412 19:42:58.340994  133323 status.go:255] multinode-20220412193547-42006-m02 status: &{Name:multinode-20220412193547-42006-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (40.52s)

TestMultiNode/serial/RestartMultiNode (113.47s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:342: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:352: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20220412193547-42006 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0412 19:43:02.669735   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/addons-20220412192056-42006/client.crt: no such file or directory
E0412 19:43:14.515715   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/functional-20220412192609-42006/client.crt: no such file or directory
E0412 19:44:25.719230   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/addons-20220412192056-42006/client.crt: no such file or directory
multinode_test.go:352: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20220412193547-42006 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m52.739415704s)
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20220412193547-42006 status --alsologtostderr
multinode_test.go:372: (dbg) Run:  kubectl get nodes
multinode_test.go:380: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (113.47s)

TestMultiNode/serial/ValidateNameConflict (42.96s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:441: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-20220412193547-42006
multinode_test.go:450: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20220412193547-42006-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:450: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-20220412193547-42006-m02 --driver=docker  --container-runtime=containerd: exit status 14 (76.545288ms)

-- stdout --
	* [multinode-20220412193547-42006-m02] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=13812
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-20220412193547-42006-m02' is duplicated with machine name 'multinode-20220412193547-42006-m02' in profile 'multinode-20220412193547-42006'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:458: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20220412193547-42006-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:458: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20220412193547-42006-m03 --driver=docker  --container-runtime=containerd: (39.785861615s)
multinode_test.go:465: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-20220412193547-42006
multinode_test.go:465: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-20220412193547-42006: exit status 80 (354.629821ms)

-- stdout --
	* Adding node m03 to cluster multinode-20220412193547-42006
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-20220412193547-42006-m03 already exists in multinode-20220412193547-42006-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:470: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-20220412193547-42006-m03
multinode_test.go:470: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-20220412193547-42006-m03: (2.680862865s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (42.96s)

TestPreload (129.16s)

=== RUN   TestPreload
preload_test.go:48: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-20220412194539-42006 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.17.0
E0412 19:45:54.807343   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/ingress-addon-legacy-20220412192911-42006/client.crt: no such file or directory
preload_test.go:48: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-20220412194539-42006 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.17.0: (1m23.703074735s)
preload_test.go:61: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-20220412194539-42006 -- sudo crictl pull gcr.io/k8s-minikube/busybox
preload_test.go:61: (dbg) Done: out/minikube-linux-amd64 ssh -p test-preload-20220412194539-42006 -- sudo crictl pull gcr.io/k8s-minikube/busybox: (1.853123661s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-20220412194539-42006 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd --kubernetes-version=v1.17.3
preload_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-20220412194539-42006 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd --kubernetes-version=v1.17.3: (40.647242551s)
preload_test.go:80: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-20220412194539-42006 -- sudo crictl image ls
helpers_test.go:175: Cleaning up "test-preload-20220412194539-42006" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-20220412194539-42006
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-20220412194539-42006: (2.582823761s)
--- PASS: TestPreload (129.16s)

TestScheduledStopUnix (117.18s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-20220412194748-42006 --memory=2048 --driver=docker  --container-runtime=containerd
E0412 19:48:02.670292   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/addons-20220412192056-42006/client.crt: no such file or directory
E0412 19:48:14.515653   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/functional-20220412192609-42006/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-20220412194748-42006 --memory=2048 --driver=docker  --container-runtime=containerd: (40.203648039s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20220412194748-42006 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-20220412194748-42006 -n scheduled-stop-20220412194748-42006
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20220412194748-42006 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20220412194748-42006 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20220412194748-42006 -n scheduled-stop-20220412194748-42006
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-20220412194748-42006
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20220412194748-42006 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0412 19:49:37.562361   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/functional-20220412192609-42006/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-20220412194748-42006
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-20220412194748-42006: exit status 7 (93.406798ms)

-- stdout --
	scheduled-stop-20220412194748-42006
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20220412194748-42006 -n scheduled-stop-20220412194748-42006
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20220412194748-42006 -n scheduled-stop-20220412194748-42006: exit status 7 (93.718102ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-20220412194748-42006" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-20220412194748-42006
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-20220412194748-42006: (5.202109949s)
--- PASS: TestScheduledStopUnix (117.18s)
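
Note: the sequence above exercises minikube's scheduled-stop flow end to end: schedule a stop in the future, confirm the countdown is visible, cancel it, then let a short schedule actually fire and observe the Stopped status (exit code 7). The same steps as plain commands, with a hypothetical profile name; the flags are the ones this run used:

	minikube stop -p demo --schedule 5m                 # arrange a stop 5 minutes out
	minikube status -p demo --format='{{.TimeToStop}}'  # the countdown is exposed here
	minikube stop -p demo --cancel-scheduled            # call the stop off
	minikube stop -p demo --schedule 15s                # reschedule and let it fire
	minikube status -p demo                             # now reports Stopped, exit 7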

TestInsufficientStorage (17.67s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-20220412194945-42006 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-20220412194945-42006 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (10.847528303s)

-- stdout --
	{"specversion":"1.0","id":"7c62d6de-e686-4ea5-97d2-803e0065ce51","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-20220412194945-42006] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"9046e1ce-0161-46d3-9f8c-ffc69b3f92fc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=13812"}}
	{"specversion":"1.0","id":"eb9d0a1e-74ef-4d3a-bd4f-14b4fe654295","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"b67cb010-2198-43c5-bda7-3a0689f7b0a8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig"}}
	{"specversion":"1.0","id":"d8801b99-0d98-4e42-8832-48b31b4575e8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube"}}
	{"specversion":"1.0","id":"11c77f57-dd63-4645-a02d-14a27eff86fa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"dcb1b7c9-b03a-47b2-88a3-c9f0f8058eb6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"516e5ebc-0775-45f9-92ce-b20e094578a1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"c2da9c05-804f-4fbf-8d87-056e21e7fd63","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"39d35aaa-57ad-4dea-83d8-88eec6176593","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"Your cgroup does not allow setting memory."}}
	{"specversion":"1.0","id":"63d4a6c4-0bee-405b-a1e5-80bdb6a4f4ba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities"}}
	{"specversion":"1.0","id":"8fc0b539-c45b-4031-baa9-02e42c19c383","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with the root privilege"}}
	{"specversion":"1.0","id":"f5cd9d31-28bd-4467-8457-663364c427a0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-20220412194945-42006 in cluster insufficient-storage-20220412194945-42006","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"f67ff2d8-1daf-459a-b05a-f43610e7f1a6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"3966ebe0-f060-4193-b62b-9312f12dd7b2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"47792120-d521-426d-a8c8-1b8cc9373a2c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-20220412194945-42006 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-20220412194945-42006 --output=json --layout=cluster: exit status 7 (362.669557ms)

-- stdout --
	{"Name":"insufficient-storage-20220412194945-42006","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.25.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20220412194945-42006","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0412 19:49:56.830943  153395 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20220412194945-42006" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-20220412194945-42006 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-20220412194945-42006 --output=json --layout=cluster: exit status 7 (359.090311ms)

-- stdout --
	{"Name":"insufficient-storage-20220412194945-42006","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.25.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20220412194945-42006","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0412 19:49:57.190238  153496 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20220412194945-42006" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	E0412 19:49:57.199079  153496 status.go:557] unable to read event log: stat: stat /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/insufficient-storage-20220412194945-42006/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-20220412194945-42006" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-20220412194945-42006
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-20220412194945-42006: (6.10436219s)
--- PASS: TestInsufficientStorage (17.67s)
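
Note: this test never fills a real disk. The MINIKUBE_TEST_STORAGE_CAPACITY=100 and MINIKUBE_TEST_AVAILABLE_STORAGE=19 settings recorded in the JSON events above appear to override the storage preflight, steering minikube into its out-of-disk error path (exit code 26, RSRC_DOCKER_STORAGE). A sketch of reproducing that path with the same overrides and a hypothetical profile name:

	MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 \
		minikube start -p demo --output=json --driver=docker --container-runtime=containerd
	echo $?    # expect 26; per the error's own advice, --force skips the check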

TestRunningBinaryUpgrade (87.22s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Run:  /tmp/minikube-v1.16.0.93457211.exe start -p running-upgrade-20220412195256-42006 --memory=2200 --vm-driver=docker  --container-runtime=containerd
E0412 19:53:02.669806   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/addons-20220412192056-42006/client.crt: no such file or directory
E0412 19:53:14.514701   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/functional-20220412192609-42006/client.crt: no such file or directory

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Done: /tmp/minikube-v1.16.0.93457211.exe start -p running-upgrade-20220412195256-42006 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (43.613048418s)
version_upgrade_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-20220412195256-42006 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-20220412195256-42006 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (39.771819249s)
helpers_test.go:175: Cleaning up "running-upgrade-20220412195256-42006" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-20220412195256-42006
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-20220412195256-42006: (3.348813012s)
--- PASS: TestRunningBinaryUpgrade (87.22s)

TestKubernetesUpgrade (167.25s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20220412195142-42006 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20220412195142-42006 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (59.894351256s)
version_upgrade_test.go:234: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-20220412195142-42006
version_upgrade_test.go:234: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-20220412195142-42006: (5.798047096s)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-20220412195142-42006 status --format={{.Host}}
version_upgrade_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-20220412195142-42006 status --format={{.Host}}: exit status 7 (101.431511ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:241: status error: exit status 7 (may be ok)
version_upgrade_test.go:250: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20220412195142-42006 --memory=2200 --kubernetes-version=v1.23.6-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:250: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20220412195142-42006 --memory=2200 --kubernetes-version=v1.23.6-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (55.635817366s)
version_upgrade_test.go:255: (dbg) Run:  kubectl --context kubernetes-upgrade-20220412195142-42006 version --output=json
version_upgrade_test.go:274: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:276: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20220412195142-42006 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:276: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-20220412195142-42006 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=containerd: exit status 106 (96.142912ms)

-- stdout --
	* [kubernetes-upgrade-20220412195142-42006] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=13812
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.23.6-rc.0 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-20220412195142-42006
	    minikube start -p kubernetes-upgrade-20220412195142-42006 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-20220412195142-420062 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.23.6-rc.0, by running:
	    
	    minikube start -p kubernetes-upgrade-20220412195142-42006 --kubernetes-version=v1.23.6-rc.0
	    

** /stderr **
version_upgrade_test.go:280: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20220412195142-42006 --memory=2200 --kubernetes-version=v1.23.6-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:282: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20220412195142-42006 --memory=2200 --kubernetes-version=v1.23.6-rc.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (42.104575222s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-20220412195142-42006" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-20220412195142-42006

=== CONT  TestKubernetesUpgrade
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-20220412195142-42006: (3.551552167s)
--- PASS: TestKubernetesUpgrade (167.25s)
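
Note: the upgrade path exercised above is v1.16.0 -> stop -> v1.23.6-rc.0, and the downgrade attempt is required to fail with exit status 106 (K8S_DOWNGRADE_UNSUPPORTED) before a final restart at the newer version. A condensed sketch of the sequence, reusing the profile name from this run:

    out/minikube-linux-amd64 start -p kubernetes-upgrade-20220412195142-42006 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker --container-runtime=containerd
    out/minikube-linux-amd64 stop -p kubernetes-upgrade-20220412195142-42006
    out/minikube-linux-amd64 start -p kubernetes-upgrade-20220412195142-42006 --memory=2200 --kubernetes-version=v1.23.6-rc.0 --driver=docker --container-runtime=containerd
    # the downgrade must be refused; expect exit status 106
    out/minikube-linux-amd64 start -p kubernetes-upgrade-20220412195142-42006 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker --container-runtime=containerd; echo "exit: $?"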

TestMissingContainerUpgrade (152.78s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Run:  /tmp/minikube-v1.9.1.867752411.exe start -p missing-upgrade-20220412195111-42006 --memory=2200 --driver=docker  --container-runtime=containerd

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Done: /tmp/minikube-v1.9.1.867752411.exe start -p missing-upgrade-20220412195111-42006 --memory=2200 --driver=docker  --container-runtime=containerd: (1m18.890049655s)
version_upgrade_test.go:325: (dbg) Run:  docker stop missing-upgrade-20220412195111-42006
version_upgrade_test.go:325: (dbg) Done: docker stop missing-upgrade-20220412195111-42006: (10.293315473s)
version_upgrade_test.go:330: (dbg) Run:  docker rm missing-upgrade-20220412195111-42006
version_upgrade_test.go:336: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-20220412195111-42006 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:336: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-20220412195111-42006 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (59.306201933s)
helpers_test.go:175: Cleaning up "missing-upgrade-20220412195111-42006" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-20220412195111-42006

=== CONT  TestMissingContainerUpgrade
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-20220412195111-42006: (3.650104477s)
--- PASS: TestMissingContainerUpgrade (152.78s)

TestStoppedBinaryUpgrade/Setup (0.55s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.55s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-20220412195003-42006 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-20220412195003-42006 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (101.647947ms)

-- stdout --
	* [NoKubernetes-20220412195003-42006] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=13812
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
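
Note: --no-kubernetes and --kubernetes-version are mutually exclusive, so this subtest passes precisely because start exits with status 14 (MK_USAGE). A sketch of the conflict and the remedy the error message suggests, using the profile name from this run:

    # expected to fail fast with exit status 14
    out/minikube-linux-amd64 start -p NoKubernetes-20220412195003-42006 --no-kubernetes --kubernetes-version=1.20 --driver=docker --container-runtime=containerd; echo "exit: $?"
    # if a version is pinned in the global config, clear it first
    out/minikube-linux-amd64 config unset kubernetes-version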

TestNoKubernetes/serial/StartWithK8s (60.87s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-20220412195003-42006 --driver=docker  --container-runtime=containerd

=== CONT  TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-20220412195003-42006 --driver=docker  --container-runtime=containerd: (1m0.433611543s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-20220412195003-42006 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (60.87s)

TestStoppedBinaryUpgrade/Upgrade (117.82s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Run:  /tmp/minikube-v1.16.0.4077205850.exe start -p stopped-upgrade-20220412195003-42006 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:190: (dbg) Done: /tmp/minikube-v1.16.0.4077205850.exe start -p stopped-upgrade-20220412195003-42006 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (47.011542053s)
version_upgrade_test.go:199: (dbg) Run:  /tmp/minikube-v1.16.0.4077205850.exe -p stopped-upgrade-20220412195003-42006 stop
version_upgrade_test.go:199: (dbg) Done: /tmp/minikube-v1.16.0.4077205850.exe -p stopped-upgrade-20220412195003-42006 stop: (1.295462598s)
version_upgrade_test.go:205: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-20220412195003-42006 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0412 19:50:54.807251   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/ingress-addon-legacy-20220412192911-42006/client.crt: no such file or directory

=== CONT  TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:205: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-20220412195003-42006 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m9.510737313s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (117.82s)

TestNoKubernetes/serial/StartWithStopK8s (19.05s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-20220412195003-42006 --no-kubernetes --driver=docker  --container-runtime=containerd

=== CONT  TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-20220412195003-42006 --no-kubernetes --driver=docker  --container-runtime=containerd: (15.054569959s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-20220412195003-42006 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-20220412195003-42006 status -o json: exit status 2 (499.835255ms)

-- stdout --
	{"Name":"NoKubernetes-20220412195003-42006","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-20220412195003-42006
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-20220412195003-42006: (3.490562718s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (19.05s)
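
Note: after restarting an existing profile with --no-kubernetes, the container keeps running while kubelet and the apiserver stay stopped, so status -o json exits 2 even though it still prints the component states. A sketch of reading those fields, assuming jq is available:

    out/minikube-linux-amd64 -p NoKubernetes-20220412195003-42006 status -o json > status.json || true
    # expect Host "Running" with Kubelet and APIServer "Stopped"
    jq '.Host, .Kubelet, .APIServer' status.json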

TestNoKubernetes/serial/Start (4.54s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-20220412195003-42006 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-20220412195003-42006 --no-kubernetes --driver=docker  --container-runtime=containerd: (4.543942275s)
--- PASS: TestNoKubernetes/serial/Start (4.54s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.38s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-20220412195003-42006 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-20220412195003-42006 "sudo systemctl is-active --quiet service kubelet": exit status 1 (375.47582ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.38s)
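
Note: the verification shells into the node and relies on systemctl exit codes; "Process exited with status 3" is systemd's code for an inactive unit, which is exactly what a no-Kubernetes profile should report for kubelet (minikube ssh itself then exits 1). The same probe by hand:

    out/minikube-linux-amd64 ssh -p NoKubernetes-20220412195003-42006 "sudo systemctl is-active --quiet service kubelet" || echo "kubelet not active (exit $?)"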

TestNoKubernetes/serial/ProfileList (1.80s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.80s)

TestNoKubernetes/serial/Stop (1.31s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-20220412195003-42006
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-20220412195003-42006: (1.310274872s)
--- PASS: TestNoKubernetes/serial/Stop (1.31s)

TestNoKubernetes/serial/StartNoArgs (6.00s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-20220412195003-42006 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-20220412195003-42006 --driver=docker  --container-runtime=containerd: (6.002646869s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.00s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.65s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-20220412195003-42006 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-20220412195003-42006 "sudo systemctl is-active --quiet service kubelet": exit status 1 (646.14356ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.65s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.91s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:213: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-20220412195003-42006
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.91s)

TestNetworkPlugins/group/false (0.72s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:213: (dbg) Run:  out/minikube-linux-amd64 start -p false-20220412195202-42006 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd

=== CONT  TestNetworkPlugins/group/false
net_test.go:213: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-20220412195202-42006 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (257.080778ms)

-- stdout --
	* [false-20220412195202-42006] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=13812
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	* Using the docker driver based on user configuration
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	
	

-- /stdout --
** stderr ** 
	I0412 19:52:02.639889  176413 out.go:297] Setting OutFile to fd 1 ...
	I0412 19:52:02.640036  176413 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0412 19:52:02.640047  176413 out.go:310] Setting ErrFile to fd 2...
	I0412 19:52:02.640051  176413 out.go:344] TERM=,COLORTERM=, which probably does not support color
	I0412 19:52:02.640192  176413 root.go:322] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/bin
	I0412 19:52:02.640547  176413 out.go:304] Setting JSON to false
	I0412 19:52:02.642225  176413 start.go:115] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":9276,"bootTime":1649783847,"procs":651,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1023-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0412 19:52:02.642301  176413 start.go:125] virtualization: kvm guest
	I0412 19:52:02.645262  176413 out.go:176] * [false-20220412195202-42006] minikube v1.25.2 on Ubuntu 20.04 (kvm/amd64)
	I0412 19:52:02.646971  176413 out.go:176]   - MINIKUBE_LOCATION=13812
	I0412 19:52:02.645496  176413 notify.go:193] Checking for updates...
	I0412 19:52:02.648807  176413 out.go:176]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0412 19:52:02.650410  176413 out.go:176]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/kubeconfig
	I0412 19:52:02.651929  176413 out.go:176]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube
	I0412 19:52:02.655517  176413 out.go:176]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0412 19:52:02.656164  176413 config.go:178] Loaded profile config "kubernetes-upgrade-20220412195142-42006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	I0412 19:52:02.656267  176413 config.go:178] Loaded profile config "missing-upgrade-20220412195111-42006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.18.0
	I0412 19:52:02.656448  176413 config.go:178] Loaded profile config "stopped-upgrade-20220412195003-42006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0412 19:52:02.656510  176413 driver.go:346] Setting default libvirt URI to qemu:///system
	I0412 19:52:02.706415  176413 docker.go:137] docker version: linux-20.10.14
	I0412 19:52:02.706578  176413 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0412 19:52:02.813882  176413 info.go:265] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:75 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:72 OomKillDisable:true NGoroutines:74 SystemTime:2022-04-12 19:52:02.73888778 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.13.0-1023-gcp OperatingSystem:Ubuntu 20.04.4 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33662799872 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:<nil>}}
	I0412 19:52:02.814031  176413 docker.go:254] overlay module found
	I0412 19:52:02.816592  176413 out.go:176] * Using the docker driver based on user configuration
	I0412 19:52:02.816627  176413 start.go:284] selected driver: docker
	I0412 19:52:02.816634  176413 start.go:801] validating driver "docker" against <nil>
	I0412 19:52:02.816659  176413 start.go:812] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	W0412 19:52:02.816719  176413 oci.go:120] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0412 19:52:02.816740  176413 out.go:241] ! Your cgroup does not allow setting memory.
	! Your cgroup does not allow setting memory.
	I0412 19:52:02.818193  176413 out.go:176]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0412 19:52:02.820166  176413 out.go:176] 
	W0412 19:52:02.820310  176413 out.go:241] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0412 19:52:02.821733  176413 out.go:176] 

** /stderr **
helpers_test.go:175: Cleaning up "false-20220412195202-42006" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-20220412195202-42006
--- PASS: TestNetworkPlugins/group/false (0.72s)
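
Note: this "false" plugin group passes by being rejected up front: the containerd runtime requires a CNI, so --cni=false is refused with MK_USAGE (exit status 14) before any node is created. Reproduced in isolation, with the profile name from this run:

    # expected to exit 14 with: The "containerd" container runtime requires CNI
    out/minikube-linux-amd64 start -p false-20220412195202-42006 --memory=2048 --cni=false --driver=docker --container-runtime=containerd; echo "exit: $?"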

TestNetworkPlugins/group/auto/Start (61.39s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p auto-20220412195201-42006 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker  --container-runtime=containerd

=== CONT  TestNetworkPlugins/group/auto/Start
net_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p auto-20220412195201-42006 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker  --container-runtime=containerd: (1m1.385998116s)
--- PASS: TestNetworkPlugins/group/auto/Start (61.39s)

TestNetworkPlugins/group/custom-weave/Start (75.39s)

=== RUN   TestNetworkPlugins/group/custom-weave/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p custom-weave-20220412195203-42006 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata/weavenet.yaml --driver=docker  --container-runtime=containerd

=== CONT  TestNetworkPlugins/group/custom-weave/Start
net_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p custom-weave-20220412195203-42006 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata/weavenet.yaml --driver=docker  --container-runtime=containerd: (1m15.388731557s)
--- PASS: TestNetworkPlugins/group/custom-weave/Start (75.39s)

TestNetworkPlugins/group/auto/KubeletFlags (0.40s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-20220412195201-42006 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.40s)

TestNetworkPlugins/group/auto/NetCatPod (13.33s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context auto-20220412195201-42006 replace --force -f testdata/netcat-deployment.yaml
net_test.go:145: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-668db85669-rvdhp" [b271078b-3d95-45f1-a19a-6e2298457d2c] Pending
helpers_test.go:342: "netcat-668db85669-rvdhp" [b271078b-3d95-45f1-a19a-6e2298457d2c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-668db85669-rvdhp" [b271078b-3d95-45f1-a19a-6e2298457d2c] Running
net_test.go:145: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 13.007840366s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (13.33s)

TestNetworkPlugins/group/auto/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:162: (dbg) Run:  kubectl --context auto-20220412195201-42006 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.15s)

TestNetworkPlugins/group/auto/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:181: (dbg) Run:  kubectl --context auto-20220412195201-42006 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

TestNetworkPlugins/group/auto/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:231: (dbg) Run:  kubectl --context auto-20220412195201-42006 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.13s)
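
Note: DNS, Localhost and HairPin are the same three probes every plugin group runs against its netcat deployment: service DNS resolution, a loopback dial inside the pod, and a hairpin dial from the pod back to its own service. The trio for the auto profile, verbatim from the steps above:

    kubectl --context auto-20220412195201-42006 exec deployment/netcat -- nslookup kubernetes.default
    kubectl --context auto-20220412195201-42006 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    kubectl --context auto-20220412195201-42006 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"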

TestNetworkPlugins/group/cilium/Start (82.62s)

=== RUN   TestNetworkPlugins/group/cilium/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p cilium-20220412195203-42006 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker  --container-runtime=containerd
E0412 19:55:54.808038   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/ingress-addon-legacy-20220412192911-42006/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/cilium/Start
net_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p cilium-20220412195203-42006 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker  --container-runtime=containerd: (1m22.6182928s)
--- PASS: TestNetworkPlugins/group/cilium/Start (82.62s)

TestNetworkPlugins/group/custom-weave/KubeletFlags (0.47s)

=== RUN   TestNetworkPlugins/group/custom-weave/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-weave-20220412195203-42006 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-weave/KubeletFlags (0.47s)

TestNetworkPlugins/group/custom-weave/NetCatPod (8.39s)

=== RUN   TestNetworkPlugins/group/custom-weave/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context custom-weave-20220412195203-42006 replace --force -f testdata/netcat-deployment.yaml
net_test.go:145: (dbg) TestNetworkPlugins/group/custom-weave/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-668db85669-nrhzw" [050395ab-06cf-41f6-bbb1-d1088e33ccfe] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-668db85669-nrhzw" [050395ab-06cf-41f6-bbb1-d1088e33ccfe] Running
net_test.go:145: (dbg) TestNetworkPlugins/group/custom-weave/NetCatPod: app=netcat healthy within 8.032341652s
--- PASS: TestNetworkPlugins/group/custom-weave/NetCatPod (8.39s)

TestNetworkPlugins/group/cilium/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/cilium/ControllerPod
net_test.go:106: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: waiting 10m0s for pods matching "k8s-app=cilium" in namespace "kube-system" ...
helpers_test.go:342: "cilium-47d4p" [8cdf532d-2c12-4446-adc2-d770d3bb581c] Running
net_test.go:106: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: k8s-app=cilium healthy within 5.013912243s
--- PASS: TestNetworkPlugins/group/cilium/ControllerPod (5.02s)

TestNetworkPlugins/group/cilium/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/cilium/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-amd64 ssh -p cilium-20220412195203-42006 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/cilium/KubeletFlags (0.35s)

TestNetworkPlugins/group/cilium/NetCatPod (9.87s)

=== RUN   TestNetworkPlugins/group/cilium/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context cilium-20220412195203-42006 replace --force -f testdata/netcat-deployment.yaml
net_test.go:145: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-668db85669-pfn2l" [f4a2a08e-0bcd-4827-a674-13fe29e7da78] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-668db85669-pfn2l" [f4a2a08e-0bcd-4827-a674-13fe29e7da78] Running
net_test.go:145: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: app=netcat healthy within 9.007140569s
--- PASS: TestNetworkPlugins/group/cilium/NetCatPod (9.87s)

TestNetworkPlugins/group/cilium/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/cilium/DNS
net_test.go:162: (dbg) Run:  kubectl --context cilium-20220412195203-42006 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/cilium/DNS (0.15s)

TestNetworkPlugins/group/cilium/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/cilium/Localhost
net_test.go:181: (dbg) Run:  kubectl --context cilium-20220412195203-42006 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/cilium/Localhost (0.14s)

TestNetworkPlugins/group/cilium/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/cilium/HairPin
net_test.go:231: (dbg) Run:  kubectl --context cilium-20220412195203-42006 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/cilium/HairPin (0.13s)

TestNetworkPlugins/group/enable-default-cni/Start (61.67s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-20220412195202-42006 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker  --container-runtime=containerd
E0412 19:58:02.669808   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/addons-20220412192056-42006/client.crt: no such file or directory
E0412 19:58:14.514740   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/functional-20220412192609-42006/client.crt: no such file or directory
net_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-20220412195202-42006 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m1.669502693s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (61.67s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.37s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-20220412195202-42006 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.37s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.26s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context enable-default-cni-20220412195202-42006 replace --force -f testdata/netcat-deployment.yaml
net_test.go:145: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-668db85669-wcmv5" [87efcc68-696f-4a4c-9b98-5e256912d0bd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-668db85669-wcmv5" [87efcc68-696f-4a4c-9b98-5e256912d0bd] Running
net_test.go:145: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.042913727s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.26s)

TestNetworkPlugins/group/bridge/Start (315.56s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-20220412195202-42006 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker  --container-runtime=containerd
E0412 20:02:51.329149   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/cilium-20220412195203-42006/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/bridge/Start
net_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p bridge-20220412195202-42006 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker  --container-runtime=containerd: (5m15.558130587s)
--- PASS: TestNetworkPlugins/group/bridge/Start (315.56s)

TestStartStop/group/no-preload/serial/FirstStart (74.08s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:170: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-20220412200453-42006 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.6-rc.0
E0412 20:04:54.210963   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/cilium-20220412195203-42006/client.crt: no such file or directory

=== CONT  TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:170: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-20220412200453-42006 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.6-rc.0: (1m14.075511313s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (74.08s)

TestStartStop/group/no-preload/serial/DeployApp (9.52s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:180: (dbg) Run:  kubectl --context no-preload-20220412200453-42006 create -f testdata/busybox.yaml
start_stop_delete_test.go:180: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [5301eee2-9094-465e-8dce-3df28b445d7c] Pending
helpers_test.go:342: "busybox" [5301eee2-9094-465e-8dce-3df28b445d7c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [5301eee2-9094-465e-8dce-3df28b445d7c] Running
start_stop_delete_test.go:180: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.011390194s
start_stop_delete_test.go:180: (dbg) Run:  kubectl --context no-preload-20220412200453-42006 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.52s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.68s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:189: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-20220412200453-42006 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:199: (dbg) Run:  kubectl --context no-preload-20220412200453-42006 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.68s)

TestStartStop/group/no-preload/serial/Stop (20.25s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:212: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-20220412200453-42006 --alsologtostderr -v=3
E0412 20:06:17.562817   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/functional-20220412192609-42006/client.crt: no such file or directory
E0412 20:06:25.945942   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/custom-weave-20220412195203-42006/client.crt: no such file or directory
start_stop_delete_test.go:212: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-20220412200453-42006 --alsologtostderr -v=3: (20.246235716s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (20.25s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:223: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20220412200453-42006 -n no-preload-20220412200453-42006
start_stop_delete_test.go:223: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20220412200453-42006 -n no-preload-20220412200453-42006: exit status 7 (100.273732ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:223: status error: exit status 7 (may be ok)
start_stop_delete_test.go:230: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-20220412200453-42006 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)
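
Note: status --format takes a Go template over the same fields the JSON output exposes ({{.Host}}, {{.Kubelet}}, {{.APIServer}}), and a stopped host makes the command exit 7, which the test treats as acceptable. A sketch of the post-stop check, using the profile name from this run:

    out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20220412200453-42006 -n no-preload-20220412200453-42006 || echo "host stopped (exit $?)"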

TestStartStop/group/no-preload/serial/SecondStart (325.13s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:240: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-20220412200453-42006 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.6-rc.0
E0412 20:07:10.367045   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/cilium-20220412195203-42006/client.crt: no such file or directory
E0412 20:07:38.051970   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/cilium-20220412195203-42006/client.crt: no such file or directory

=== CONT  TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:240: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-20220412200453-42006 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.6-rc.0: (5m24.69768344s)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20220412200453-42006 -n no-preload-20220412200453-42006
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (325.13s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.38s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-20220412195202-42006 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.38s)

TestNetworkPlugins/group/bridge/NetCatPod (9.28s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context bridge-20220412195202-42006 replace --force -f testdata/netcat-deployment.yaml
net_test.go:145: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-668db85669-hksd2" [5e967bd3-089d-44fb-a0c1-a3e5d1bf2bcb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-668db85669-hksd2" [5e967bd3-089d-44fb-a0c1-a3e5d1bf2bcb] Running
E0412 20:08:02.669665   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/addons-20220412192056-42006/client.crt: no such file or directory
net_test.go:145: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.070351439s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.28s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (12.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:258: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-8469778f77-hf4hs" [fb2eab63-86a7-4b5b-b6f1-3d5ce1368699] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:342: "kubernetes-dashboard-8469778f77-hf4hs" [fb2eab63-86a7-4b5b-b6f1-3d5ce1368699] Running
E0412 20:12:10.367087   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/cilium-20220412195203-42006/client.crt: no such file or directory
start_stop_delete_test.go:258: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 12.013036098s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (12.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:271: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-8469778f77-hf4hs" [fb2eab63-86a7-4b5b-b6f1-3d5ce1368699] Running
start_stop_delete_test.go:271: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.007494883s
start_stop_delete_test.go:275: (dbg) Run:  kubectl --context no-preload-20220412200453-42006 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.51s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:288: (dbg) Run:  out/minikube-linux-amd64 ssh -p no-preload-20220412200453-42006 "sudo crictl images -o json"
start_stop_delete_test.go:288: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:288: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.51s)

TestStartStop/group/no-preload/serial/Pause (3.32s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:295: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-20220412200453-42006 --alsologtostderr -v=1
start_stop_delete_test.go:295: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20220412200453-42006 -n no-preload-20220412200453-42006
start_stop_delete_test.go:295: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20220412200453-42006 -n no-preload-20220412200453-42006: exit status 2 (399.696418ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:295: status error: exit status 2 (may be ok)
start_stop_delete_test.go:295: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-20220412200453-42006 -n no-preload-20220412200453-42006
start_stop_delete_test.go:295: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-20220412200453-42006 -n no-preload-20220412200453-42006: exit status 2 (405.288279ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:295: status error: exit status 2 (may be ok)
start_stop_delete_test.go:295: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-20220412200453-42006 --alsologtostderr -v=1
start_stop_delete_test.go:295: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20220412200453-42006 -n no-preload-20220412200453-42006
start_stop_delete_test.go:295: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-20220412200453-42006 -n no-preload-20220412200453-42006
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.32s)

TestStartStop/group/newest-cni/serial/FirstStart (54.09s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:170: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-20220412201253-42006 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.6-rc.0
E0412 20:12:58.178137   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/bridge-20220412195202-42006/client.crt: no such file or directory
E0412 20:12:58.183453   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/bridge-20220412195202-42006/client.crt: no such file or directory
E0412 20:12:58.193751   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/bridge-20220412195202-42006/client.crt: no such file or directory
E0412 20:12:58.214092   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/bridge-20220412195202-42006/client.crt: no such file or directory
E0412 20:12:58.254440   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/bridge-20220412195202-42006/client.crt: no such file or directory
E0412 20:12:58.334809   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/bridge-20220412195202-42006/client.crt: no such file or directory
E0412 20:12:58.495240   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/bridge-20220412195202-42006/client.crt: no such file or directory
E0412 20:12:58.815595   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/bridge-20220412195202-42006/client.crt: no such file or directory
E0412 20:12:59.658311   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/bridge-20220412195202-42006/client.crt: no such file or directory
E0412 20:13:00.939481   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/bridge-20220412195202-42006/client.crt: no such file or directory
E0412 20:13:02.669525   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/addons-20220412192056-42006/client.crt: no such file or directory
E0412 20:13:03.500548   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/bridge-20220412195202-42006/client.crt: no such file or directory
E0412 20:13:08.621774   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/bridge-20220412195202-42006/client.crt: no such file or directory
E0412 20:13:14.515343   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/functional-20220412192609-42006/client.crt: no such file or directory
E0412 20:13:18.862063   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/bridge-20220412195202-42006/client.crt: no such file or directory
E0412 20:13:31.519359   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/enable-default-cni-20220412195202-42006/client.crt: no such file or directory
E0412 20:13:39.342215   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/bridge-20220412195202-42006/client.crt: no such file or directory
start_stop_delete_test.go:170: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-20220412201253-42006 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.6-rc.0: (54.089401325s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (54.09s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.59s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:189: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-20220412201253-42006 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:195: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.59s)

TestStartStop/group/newest-cni/serial/Stop (20.24s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:212: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-20220412201253-42006 --alsologtostderr -v=3
E0412 20:13:59.204761   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/enable-default-cni-20220412195202-42006/client.crt: no such file or directory
start_stop_delete_test.go:212: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-20220412201253-42006 --alsologtostderr -v=3: (20.240537881s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (20.24s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:223: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20220412201253-42006 -n newest-cni-20220412201253-42006
start_stop_delete_test.go:223: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20220412201253-42006 -n newest-cni-20220412201253-42006: exit status 7 (100.567726ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:223: status error: exit status 7 (may be ok)
start_stop_delete_test.go:230: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-20220412201253-42006 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/newest-cni/serial/SecondStart (34.29s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:240: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-20220412201253-42006 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.6-rc.0
E0412 20:14:20.302708   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/bridge-20220412195202-42006/client.crt: no such file or directory
start_stop_delete_test.go:240: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-20220412201253-42006 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.23.6-rc.0: (33.862628799s)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20220412201253-42006 -n newest-cni-20220412201253-42006
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (34.29s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:257: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:268: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.39s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:288: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-20220412201253-42006 "sudo crictl images -o json"
start_stop_delete_test.go:288: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.39s)

TestStartStop/group/newest-cni/serial/Pause (3.02s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:295: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-20220412201253-42006 --alsologtostderr -v=1
start_stop_delete_test.go:295: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20220412201253-42006 -n newest-cni-20220412201253-42006
start_stop_delete_test.go:295: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20220412201253-42006 -n newest-cni-20220412201253-42006: exit status 2 (398.557747ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:295: status error: exit status 2 (may be ok)
start_stop_delete_test.go:295: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-20220412201253-42006 -n newest-cni-20220412201253-42006
start_stop_delete_test.go:295: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-20220412201253-42006 -n newest-cni-20220412201253-42006: exit status 2 (404.493974ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:295: status error: exit status 2 (may be ok)
start_stop_delete_test.go:295: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-20220412201253-42006 --alsologtostderr -v=1
start_stop_delete_test.go:295: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20220412201253-42006 -n newest-cni-20220412201253-42006
start_stop_delete_test.go:295: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-20220412201253-42006 -n newest-cni-20220412201253-42006
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.02s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.61s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:189: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-20220412200421-42006 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain

=== CONT  TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:199: (dbg) Run:  kubectl --context old-k8s-version-20220412200421-42006 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.61s)

TestStartStop/group/old-k8s-version/serial/Stop (5.93s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:212: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-20220412200421-42006 --alsologtostderr -v=3

=== CONT  TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:212: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-20220412200421-42006 --alsologtostderr -v=3: (5.92766709s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (5.93s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:223: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20220412200421-42006 -n old-k8s-version-20220412200421-42006
start_stop_delete_test.go:223: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20220412200421-42006 -n old-k8s-version-20220412200421-42006: exit status 7 (101.597553ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:223: status error: exit status 7 (may be ok)
start_stop_delete_test.go:230: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-20220412200421-42006 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.62s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:189: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-20220412200510-42006 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0412 20:18:14.515667   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/functional-20220412192609-42006/client.crt: no such file or directory
start_stop_delete_test.go:199: (dbg) Run:  kubectl --context embed-certs-20220412200510-42006 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.62s)

TestStartStop/group/embed-certs/serial/Stop (10.6s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:212: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-20220412200510-42006 --alsologtostderr -v=3
start_stop_delete_test.go:212: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-20220412200510-42006 --alsologtostderr -v=3: (10.598878375s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (10.60s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:223: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20220412200510-42006 -n embed-certs-20220412200510-42006
start_stop_delete_test.go:223: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20220412200510-42006 -n embed-certs-20220412200510-42006: exit status 7 (96.565174ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:223: status error: exit status 7 (may be ok)
start_stop_delete_test.go:230: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-20220412200510-42006 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (0.63s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:189: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-different-port-20220412201228-42006 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:199: (dbg) Run:  kubectl --context default-k8s-different-port-20220412201228-42006 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (0.63s)

TestStartStop/group/default-k8s-different-port/serial/Stop (10.37s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:212: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-different-port-20220412201228-42006 --alsologtostderr -v=3
E0412 20:25:31.558731   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/auto-20220412195201-42006/client.crt: no such file or directory
E0412 20:25:37.857918   42006 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-13812-38647-afb3956fdbde357e4baa0f8617bfd5a64bad6558/.minikube/profiles/ingress-addon-legacy-20220412192911-42006/client.crt: no such file or directory
start_stop_delete_test.go:212: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-different-port-20220412201228-42006 --alsologtostderr -v=3: (10.367756171s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/Stop (10.37s)

TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:223: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220412201228-42006 -n default-k8s-different-port-20220412201228-42006
start_stop_delete_test.go:223: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20220412201228-42006 -n default-k8s-different-port-20220412201228-42006: exit status 7 (97.219872ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:223: status error: exit status 7 (may be ok)
start_stop_delete_test.go:230: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-different-port-20220412201228-42006 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.20s)

Test skip (25/259)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:156: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.23.5/cached-images (0s)

=== RUN   TestDownloadOnly/v1.23.5/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.23.5/cached-images (0.00s)

TestDownloadOnly/v1.23.5/binaries (0s)

=== RUN   TestDownloadOnly/v1.23.5/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.23.5/binaries (0.00s)

TestDownloadOnly/v1.23.5/kubectl (0s)

=== RUN   TestDownloadOnly/v1.23.5/kubectl
aaa_download_only_test.go:156: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.23.5/kubectl (0.00s)

TestDownloadOnly/v1.23.6-rc.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.23.6-rc.0/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.23.6-rc.0/cached-images (0.00s)

TestDownloadOnly/v1.23.6-rc.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.23.6-rc.0/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.23.6-rc.0/binaries (0.00s)

TestDownloadOnly/v1.23.6-rc.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.23.6-rc.0/kubectl
aaa_download_only_test.go:156: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.23.6-rc.0/kubectl (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:448: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:35: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:455: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:545: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Only test none driver.
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:42: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestNetworkPlugins/group/kubenet (0.45s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:88: Skipping the test as containerd container runtimes requires CNI
helpers_test.go:175: Cleaning up "kubenet-20220412195201-42006" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-20220412195201-42006
--- SKIP: TestNetworkPlugins/group/kubenet (0.45s)

TestNetworkPlugins/group/flannel (0.43s)

=== RUN   TestNetworkPlugins/group/flannel
net_test.go:76: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:175: Cleaning up "flannel-20220412195202-42006" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p flannel-20220412195202-42006
--- SKIP: TestNetworkPlugins/group/flannel (0.43s)

TestStartStop/group/disable-driver-mounts (0.46s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:102: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-20220412201227-42006" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-20220412201227-42006
--- SKIP: TestStartStop/group/disable-driver-mounts (0.46s)